Engelberg Center Live!

GenAI & the Creativity Cycle: Can AI help everyone enjoy culture as a global public good?

Episode Summary

This episode is the “Can AI help everyone enjoy culture as a global public good?” panel from the Generative AI & the Creativity Cycle Symposium hosted by Creative Commons at the Engelberg Center. It was recorded on September 13, 2023. This symposium is part of Creative Commons’ broader consultation with the cultural heritage, creative, and tech communities to support sharing knowledge and culture thoughtfully and in the public interest, in the age of generative AI. You can find the video recordings for all panels on Creative Commons’ YouTube channel, licensed openly via CC BY.

Episode Notes

Brigitte Vézina (Creative Commons) moderating a conversation with Yacine Jernite (Hugging Face), Stacey Lantagne (Western New England University), and Nicholas Garcia (Public Knowledge)

Episode Transcription

Announcer  0:03  

Welcome to Engelberg Center Live!, a collection of audio from events held by the Engelberg Center on Innovation Law and Policy at NYU Law. This episode is the "Can AI help everyone enjoy culture as a global public good?" panel from the Generative AI & the Creativity Cycle Symposium hosted by Creative Commons at the Engelberg Center. It was recorded on September 13, 2023. This symposium is part of Creative Commons' broader consultation with the cultural heritage, creative, and tech communities to support sharing knowledge and culture thoughtfully, and in the public interest, in the age of generative AI. You can find the video recordings for all panels on Creative Commons' YouTube channel, licensed openly via CC BY.

 

Brigitte Vézina  0:54  

All right, so hello again. I realized I didn't introduce myself at the very beginning. So hi, I'm Brigitte Vézina, and I am the Director of Policy and Open Culture at Creative Commons. And I'm really happy to be moderating this panel on culture as a public good and what AI's role is in all of this. So I'm really pleased to be here on this panel with excellent speakers, and I say this from experience: I've heard you speak, and it's really wonderful to have you on this panel. First is Yacine Jernite from Hugging Face. Yacine is the Machine Learning and Society lead at Hugging Face, working on machine learning systems governance at the intersection of regulatory and technical tools. His work today is focused on natural language processing and multimodal data curation, documentation, and governance, and most recently he served as co-organizer and data area chair for the BigScience workshop on large language models. Next is Stacey Lantagne. She's a law professor at Western New England University School of Law. She specializes in intellectual property law, especially where copyright and trademark law intersect with digital creativity. She's a member of the legal committee of the Organization for Transformative Works, a nonprofit dedicated to providing access to and preserving the history of fan works and fan culture in a myriad of forms. Among other things, the OTW runs the Archive of Our Own, which is a website with over 6 million registered users hosting almost 12 million works of fanfiction. And to my extreme left is Nicholas Garcia, Policy Counsel at Public Knowledge, a DC-based public interest organization that promotes freedom of expression, an open internet, and affordable access to creative tools and works. Nicholas's work at Public Knowledge is focused on emerging technologies, intellectual property, and closing the digital divide.
So what brings us together on this panel today is that last year in September, UNESCO, which is the United Nations Educational, Scientific and Cultural Organization, adopted the MONDIACULT Declaration in Mexico City. And that declaration for the first time elevated culture as a global public good. It also paved the way for culture to be recognized as a sustainable development goal in and of itself. Currently, it's not one of the seventeen specific goals; it's kind of transversal. But there's a huge movement to make culture a sustainable development goal in and of itself. And so in looking at how AI interacts with this, we've seen efforts that could demonstrate that AI could reduce the barriers for everyone to be able to enjoy culture as a global public good, but at the same time there is a risk, and we've heard this already a few times this morning, that it could perpetuate cultural power imbalances. So I'm hoping to hear from different perspectives on how to get this right and how AI could concretely support culture as a global public good. And maybe just so we agree on a definition: what is a global public good? Well, a public good is a good that, you know, benefits all members of society, society as a whole, but needs to be publicly supported in order to be sustainable. So I'll hand over to each of the panelists for your opening remarks, and then we'll jump into questions. Yacine, starting with you.

 

Yacine Jernite  5:02  

So I'm going to start with a bit more of an introduction of recent projects that are relevant to those questions. So Hugging Face is mostly a platform, so there's a role in moderating and curating the AI systems that people are sharing, seeing how we can shape things that way, offering tools that help people document them and see where they come from. There were some comments before about knowing the provenance of the data, who has been creating it, and who is represented in it. And we've also been co-organizing large efforts to help create systems in a way that hinges more on consent. So one of those examples was the BigScience project, a very imaginative, very original name. And the idea there is we worked together, over 1,000 researchers, on training one large language model. It was after GPT-3 was released: this is a cool thing, and this can be cool technology, but there are some issues we want to address with respect to consent, with respect to data curation, with respect to governance, and with interacting with current and upcoming regulations. So a couple of the choices that we made there were, for example, around independence, access, and transparency. We wanted a model that was multilingual, which wasn't a thing that existed at the time, but we wanted people to be able to control how they were represented. To manage that, we chose a set of languages that we were going to focus on, and then we had a policy that if we were going to represent a language in the training data, we needed to have first-language speakers of that language involved in the data curation. And that's a choice. We ended up with a model that didn't have any exposure to German, which is a somewhat high-resource language, and people asked, why didn't you do that? And we said, because we didn't have German speakers able to do the work of checking that the work was done right.
So we've been interacting a lot at this intersection of what's regulation, what's consent, and what's the possibility for people to control how they're represented. Another project we had was BigCode, training models for code, something like Codex or GitHub Copilot. And that was making some choices to train only on content under permissive licenses, and then giving people a level of control, a way to opt out. So that's not necessarily a copyright issue, right? People have put their data on the web with a license that allows any kind of reuse. But we understand that there's a place to address consent beyond what regulation is currently providing, or while figuring out how it's going to come into regulation. So let's just say that there are all of these choices. One of the really big things I think is important to remember about AI is that it's not a given; there are tons of development choices that occur that balance all of these issues. And one thing I do is fight those narratives of inevitability, the idea that this is just what AI is. It is what it is because that's how people have done it so far. You can imagine doing it in ways that give more access and serve multiple purposes.

 

Stacey Lantagne  8:01  

Hi. So I don't really do anything with the tech side of things. I was an English major, so I have no idea about all of that stuff. But I am an intellectual property lawyer, as you heard at the introduction, and I do a lot of work on behalf of fan creators. And I'm afraid to date myself: I am not very old, but I feel very old digitally, because I feel like I've been through this before now several times, right? Search engines, and I will date myself, became a thing when I was in law school, right? And so search engines became this thing, and we were going to be able to access all of human information. That's not where we ended up. We just put more stuff behind more locked doors and made everything as inaccessible as we could figure out how to do 20 years down the line, right? Facebook also came out while I was in law school, a very eventful law school career. And that was like, amazing, right? We were all going to be connected, and we were all going to talk to each other. And it was a very, very short road from there to the collapse of democracy, right? Like, we just didn't end up where we thought we were going to end up with a lot of these. And so I want to be really excited and happy and hopeful about AI, and I'm really nervous that we're just gonna go down the same path, right. And I think about how my sister recently had this experience. This is going to seem like it has nothing to do with AI, and in a way it doesn't, but I'm going to link it up, I promise. She wanted to show her children The Sound of Music, which is a movie that we grew up with, and she could not find a copy of The Sound of Music for her to legally access on any streaming platform. And I was like, gee, when we were kids, you know what we did? We went to Blockbuster, and we rented The Sound of Music, right? And was there a cost in having to get in a car and go to Blockbuster? Yeah, obviously.
So you would think it would be better for people, like, we could just put the movie up digitally and everybody could access these movies. They're not there, even if you want to pay for them, right? Like, we have literally created all of this scarcity, which is the opposite of what I think we thought the internet was going to bring about, that we would be able to access so much more stuff. Think about how much more difficult it is for libraries to deal with their digital collections, because those come with all sorts of legal boundaries around them that actually don't exist with physical copies. When you buy a copy of a book, you can sell the copy of that book, and they can't do anything to you. When you are buying digital music, they're blocking it, so you can't do anything else with it, right? So none of this has anything to do with AI, but this is all to sort of just share my perspective that I want AI to be awesome, but I'm really concerned that we're just going to use it to appropriate culture from people and then render it more inaccessible than it already is. And so that's why I'm so happy we're having these discussions, because I want us to sort of be thinking about these things as we move forward. I don't know if that's fantastic.

 

Nicholas Garcia  10:56  

Well, I had to say it was fantastic. It tied so well into what I was going to talk about, which is that I want to talk a little bit more about Public Knowledge, because we are a DC-based public interest organization that has been fighting for many years for the open internet, for all of the promise and potential that we see in technology for sharing, for openness, for having more affordable access to creative works and tools. And what I was hoping to bring to this conversation builds so well on what Stacey set up for me, which is talking about those fights for the open internet: fights to close the digital divide, to make the internet accessible to everyone, to achieve universal service, to fight against the digital discrimination that exists in terms of how broadband internet is deployed and in terms of how people are able to participate in digital communities, and pushing back against the mistakes, frankly, that were made in the enclosure of the web, as we moved from that early promise and potential of the internet into platforms centralizing more and more value for themselves. So yeah, what I wanted to talk about was that if we're hoping for AI to be this force for developing culture as a global public good, we need two really big things. The first thing is that we need to make sure that AI becomes an accessible technology that everyone is able to enjoy. In the same way that we have seen with the internet that getting the internet out to people is critically important in terms of ensuring that people are equitably represented online, we need to do the same thing with AI. We need to make sure that it's accessible, that people have the ability to literally connect. That means solving things like the digital divide, which is still a problem, and making sure people have access to the internet and the technology they need to engage with this.
It means making sure that it's affordable. Right now a lot of AI products are integrated into things that people already have, but maybe some of those things you need expensive licenses for. Right now a lot of things are offered for free online, in the same way that Google was free and social media was free, and then we started to understand that these things came at a greater and greater cost to us in terms of privacy and in terms of surveillance, and slowly the quality of those things degraded over time, too. So we need to ensure that we maintain that affordability, whether that's in terms of actual monetary costs or larger costs to people in society. And we need to boost adoption, which is that people need to understand why this technology is going to benefit them. They need to see that there are real benefits to them in the technology, and trust needs to be built between people and communities and AI as a technology for them to want to engage with it and view it as something positive in building towards culture. And so the second thing, which is related, is that we need to build systems, both culturally and legally, that support AI development, data policies, and intellectual property in a way that builds towards that open, shared culture, that will create the trust for people to adopt and create AI systems that are inclusive and equitable in their data practices and that represent global public culture more fulsomely. So I'll leave it at that.

 

Brigitte Vézina  14:13  

So how do we do all of that? You've given us, I think, wonderful aspirations. I guess my question is: what are the milestones on the way to enabling that kind of access that you imagine, and the sustainable systems that will need public support? What are some of the pathways to reach that?

 

Nicholas Garcia  14:38  

I'll be happy to start, since I started the issue. So one of the things that I do want to address is, like I said, as we start thinking about AI, it's very exciting to be thinking about the problems and the solution spaces within the AI realm, but it is critically important to begin thinking as well about the fundamental underlying things that we still need to address, like closing the digital divide and ensuring that people have robust access to an open internet. There are many places, including outside of the United States, where practices like zero rating and non-net-neutral policies exist that ensure that people are pretty limited in what their scope of the internet is. There are countries where people's whole capacity for interacting with the internet is limited to the social media platforms that zero-rate themselves in terms of their data plans and things like that. That's not building an open internet or an open future where people are going to have broad access to promote a global culture through AI, because everything is going to move through these kinds of gatekeepers. So both in the United States and outside of it, we need to ensure that we're laying good foundations in terms of net neutrality and an open internet, eliminating digital discrimination and redlining in terms of how we distribute the internet. All of those fundamental things are going to go into building more inclusive data environments for people to begin engaging with AI. So I think that's one thing that we definitely have to keep considering.

 

Brigitte Vézina  16:09  

That's something you're building, right?

 

Yacine Jernite  16:13  

Yes. But what I was going to say is, I absolutely agree with not diverting resources from fights that are already going on and are really important. At the same time, one of the most frustrating answers to give to the question that you started with is that it's a very multi-pronged approach. The only way we make progress is if we make progress on all of the fronts together, and one of the fronts is looking at where AI is trending, and where AI is exacerbating power imbalances that already exist. One of the things that I'm very bullish on in that space is more transparency, more disclosure requirements. If we're going to ask these questions about what the systems are doing and what they're not doing, we need to know how they're functioning. And specifically, my heart's in data governance. An AI system is a reflection of what it was trained on, so trying to regulate what these systems do without having access to the training data is like shooting ourselves in the foot, right? There's something I really want to push for: understanding that those wonderful behaviors, those super impressive interactions that you have with ChatGPT or with Claude or whatever else, are a reflection of human labor, of something that someone has done in the past. And that's something that needs to be more in the public eye, so we can see exactly what it is.

 

Stacey Lantagne  17:32  

Yeah, I think the transparency is important. And I will say, with my lawyer hat on, that it's a real challenge, because we reward companies for not being transparent, for keeping their algorithms trade secrets and things of that nature. And so there's a real need for transparency and a capitalist instinct against transparency, right? And so I think that those two things are kind of at war. I'm going to make a kind of maybe ridiculous statement about something we will probably never be able to achieve: I think we need to decouple access to information from capitalism. I don't know how we're doing that, right? But I think that's at the heart of a lot of the problems, that all of our tools are made to maximize profit instead of made to maximize access to information or transparency or connecting humans, right? Like, it's true, Facebook was supposed to be about connecting humans, but no, it's a company making people a lot of money by selling all of us, right? That's what it's really doing, and we were obscuring that for a long time. So I don't have a solution to that, but I think transparency is a good way to at least start. Or if we can't get to transparency, at least thinking about communicating with the communities that we're trying to reach out to, to get buy-in from them about how they're going to be used. And I don't say "used" as a derogatory thing, but just, what is AI doing for them, right? So that they can see the concrete sort of benefit to them, rather than feeling like this is just another instance where people are going to come in with promises but take from us, right? I feel a little bit like we need to address the distrust that is around some of these systems. And I think the idea of transparency is a really good idea.
But until we address that distrust, I think it's a long road to get the sort of AI that we want, because you're going to have a lot of fights over the data training sets and things of that nature.

 

Brigitte Vézina  19:38  

Right. And do you have any examples from your experience where AI could be an enabler for people to enjoy culture as a public good, and examples where it can be a hindrance? I think we can see both possibilities in the future. How do we make sure that AI enables people to access that information? Imagine that it becomes a tool to further that important fundamental right, that it enables people to participate in cultural life in ways that are probably not possible because of the current copyright system or the capitalist society in which we live. Is there a potential for AI to change that?

 

Yacine Jernite  20:22  

I think even with the technology we have right now, we can make some really good tools. Like, imagine you have a scan of ruins and you want to get a sense of what it looked like; this is a fantastic application for generative AI. On the other hand, because of how it's trending right now, AI, as somebody mentioned today, is an averaging machine, the way it's trained. Because there's this instinct for anyone who wants to build a good system to grab all of the data they have access to, without thinking about what biases it has, you're going to have a reconstruction of your ruins that corresponds to common current cultural beliefs about how things worked in the past. You're going to put in some very strong bias. So it's both something that can be fantastic and something that's going to move us further away from doing good science or doing good historical work, in a way that's very subtle and that I think we need to better understand.

 

Stacey Lantagne  21:28  

I mean, I would agree. I think AI as a tool is basically neutral, right? It's just how we use it. And I can see many ways that it can enable fantastic access to things that we can't do otherwise, and it already has helped us, right, in developing these kinds of systems. So yeah, I feel like I said a lot of negative things, and I don't want to leave it there: I totally see that this could be a really useful tool. But I think it's up to us at this point, right, what we do with these AI systems. So yeah.

 

Nicholas Garcia  22:03  

Yeah, I mean, it's a great point. I'm thinking particularly of the fact that we definitely have a long way to go in terms of bias and discrimination. There's amazing work that people have been doing already for a long time, and I want to lift up in particular the women and people of color in the United States who have been working on algorithmic accountability long before this current wave of AI hype, thinking through those difficult problems that we're going to have to contend with in terms of making sure these tools are really equitable and good for something that cuts across cultures. And at the same time, I do want to get excited about the potential of some of the things that this technology promises for us. I mean, looking at even something that seems simple now to people, like language translation, which has seen huge leaps and bounds forward thanks to AI technology, is a pretty amazing thing. If you take a few steps back and think about how close we are getting to a really robust ability to do a Star Trek-style universal translator, on-demand, very inexpensive language translation is just a huge potential game changer for society and for global culture. So I think there's a lot to get excited about, and we should stay excited, even while acknowledging that there's a lot of work to do. And we should look to the people that have been thinking about how to do that work for a long time, embracing the solutions that people are already developing.

 

Yacine Jernite  23:38  

I think maybe something I'll add is that one of the most damaging trends we have in AI is this idea of one model to do everything, or using generative AI, because it's impressive, to do lots of things that need reliability. There's so much we can get by making it easy for someone who has a dataset, and knows what their inputs and outputs are, to build an AI system. That's not going to take, again, large numbers; it's going to take maybe a bit of in-house competency, but not that much, and sharing the work that's been done elsewhere, while keeping control of your data, keeping control of your use case, and keeping control of your outputs.

 

Brigitte Vézina  24:25  

I wonder if you were in law school when Creative Commons started?

 

Stacey Lantagne  24:29  

Probably, all the good things happened then. It made my education of dubious value as soon as I graduated; it was dated. But yes.

 

Brigitte Vézina  24:40  

No, the reason I'm saying this is that Creative Commons is at the foundation of this open infrastructure that enables open sharing today, and our licenses are essential to that scaffolding for these exchanges of culture and knowledge and information to happen on the open web. And I wonder if you see a role for generative AI here to help realize not only access, use, and remix, and all the great things that, you know, the early days of the internet promised us, but also to sustain this infrastructure that is at risk because of all these other models that are, you know, threatening the way that we want to see a public infrastructure flourish. So I wonder if you have come across examples of AI being able to not only provide that access, but also really support, in a sustainable way, the essential infrastructure for all of these exchanges to be possible.

 

Nicholas Garcia  25:48  

Yeah, I'd love to weigh in on that. So, especially thinking about our intellectual property system right now: we are dealing, in the face of this, with what seems like a big change to a lot of people, which is aggregating data together into datasets in order to do this training of models. That is, you know, sending people spinning in many ways in terms of grappling with how to do this, and in terms of building the digital infrastructure, it has put up a lot of questions. I think it's great that you raised this here, because talking about opt-outs and responsible data practices is a critical component of all of this. And I do hope that there will be a virtuous cycle, where AI models that are well trained and responsibly designed are then able to be used to enable better infrastructure for doing the same thing. And so I'm thinking about things like helping people understand and pick CC licenses using AI, for example; that would be a great example of helping people understand the use cases for all of these different things. Those would be ways that we could use this technology itself to help promote that infrastructure of global sharing. That's a really exciting possibility. I think another thing that is worth getting excited about is that, as people are seeing that there's value in aggregated, shared cultural data and information, this is an opportunity to revitalize excitement in these original principles of openness and sharing and digital public infrastructure, and to get people excited about the idea that maybe we should be looking at how we build things collectively, and understanding the value of a public good again.
And this is an opportunity to do that. Instead of making it a moment where we shrink away from that, we should be looking at it as a chance to engage people and show them that we can build things together that are way better than anything we could build and deploy on our own.

 

Stacey Lantagne  27:54  

Yeah, that's right, I think. So I think about projects like the curation test project that we've heard a lot about, where it's helping people search art better and more accurately; that's great. I've heard people talking about trying to make it easy to figure out if something you want to use is in the public domain, or what kind of Creative Commons license is around it. That would also be really great. We have a lot of barriers that come up when people can't figure out the copyright status of things, and so it would be wonderful if we could use AI to help us figure that out better. But I think, along the lines of what both of you have said, AI is only as good as its dataset and its training materials, and we really do need to reach out. And there's a tension there: you want to have the best dataset that you can get for the training materials, but you also don't want to steal people's stuff to make your training materials, right? So how can we maximize how much we have for the training materials? I think a lot of that is outreach. But I think Amanda brought up on the first panel, also, that taking things out of their original intended audience is something to really consider. I work with fan creators, and our data got scraped. I think it was Abby who said, if your data is out there, it's been scraped and put into AI. Yeah, ours is out there. And we had a big outcry about it, and a lot of them felt very personally victimized, because this was work that they had put up, and yes, it's a public community, and we talk to our users all the time about how, on the internet, it's public. But they felt it was part of a community that understood the context of it, that lifted it up, that celebrated it, that supported their creativity.
And now it's been removed and taken into a context where they feel they've lost all control, and they don't know what's going to happen to it next, right? And the truth is, they always had no control, because they had it on the internet, right? But it's just a way of thinking about how there are some corners of the internet that still feel very insulated and niche, even on the internet, not even talking about real life. And when you're crawling all corners of the internet and bringing it into a centralized place, that's something to think about, too: you're removing things from context, maybe removing things from the audience they were intended for. And I know why we're doing that, we want the best datasets, but it causes that lack of trust, and then you don't get the buy-in, and then it gets worse and worse, because people start locking up their stuff, and it becomes a snowball effect, right? So that's just something to think about as we think of the awesome things that we can do with AI.

 

Yacine Jernite  30:35  

I really want to jump in on that, because it brings together what someone mentioned about licensing in the previous panel, some of the limits of where AI is pushing, and some of the biases. Where I think it touches a lot of how we think about those questions is at that level of generalization, right? People will go pick a dataset; the dataset makers might have had an intent for it, or they might just be making a dataset, and then people use it for a specific purpose. And then we have these breaking points where the person making the dataset maybe didn't do anything wrong with respect to what the originator wanted, but then who's responsible when someone else does, right? Because they've facilitated it. One anecdote that I really like sharing: about a year ago, I did an hour-and-a-half session on data governance with teenagers, who had a lot more to say about it than some of my colleagues. And the responses were extremely gendered, right? The question was, if you put something on the internet, is it free for anyone to use? And I have to say that a lot of the boys were saying, yes, of course, you put it out there, now it's your problem. And a lot of the girls were like, no, I put it there for that purpose, to share with my friends. So that really aligns with some of the ways people are treating this. The "yes, of course, everybody agrees that it's free to use, that it's fair use," with a very, very rough definition of fair use, tends to come from the people developing the industry, because they were there first and the assumptions work for them. To get a bit more specific, I think there's a lot to figure out in terms of licensing that allows for openness but still has purpose encoded in it. And we're still figuring out what that is. And obviously, Creative Commons is one of the organizations that started by analyzing that.

 

Brigitte Vézina  32:35  

That raises a lot of new ideas, I think. I want to thank you for sharing your vision, and also for sharing a few very concrete ideas on how AI could be super helpful to make sure that everyone can enjoy culture as a public good. I want to open the floor for questions. We have about 10 minutes; microphones are coming your way.

 

Speaker 7  33:02  

Hey, thank you so much for the conversation. I have a question on the governance side. If we think of AI and culture as a public good, there is a governance conversation that's really important, also about the governance of AI. So it's not just the question of how can AI help us enjoy culture, but also, how can we govern AI such that we can have the culture that we want? And I'm curious to hear any thoughts you have on lessons from thinking about data governance, from thinking about commons governance, in other aspects of digital public infrastructure, for this moment where the conversation about generative AI governance is taking a specific shape. I'm thinking, among other things, about this strange new conversation about red teaming as a public oversight thing. But also, specifically, inviting: here are some lessons from hard thinking, from spending years thinking about licensing, about the public interest, and, you know, well-governed public interest projects in this space; here are some lessons for this strange new oversight conversation. I'd love to hear if

 

Speaker 8  34:07  

anyone wants to jump in. Yeah. So

 

Nicholas Garcia  34:13  

one area where I think we can take some concrete lessons on this actually comes from other areas of telecom, where there have been things like public interest regulatory standards. For a long time, there were public interest requirements for broadcasters in the United States, and there have been different kinds of public interest requirements for cable operators. And it's worth thinking about how those kinds of regulatory structures could exist for AI companies. I think it's also worth thinking about the possibility that it's not a done deal that AI must be the domain purely of private actors, that there is room for public institutions still to get in on this technology. And we should think about, if it really is such a critical piece of the digital public infrastructure going forward, what role the public sector plays in developing AI and playing into the ecosystem, in big or small ways, in order to ensure that we have governance built into our systems of government, already directly involved in the system. And that would be a way to ensure that we're protecting our values. A really small example of this, but I think an impactful one, is that the United States is thinking about creating a national AI research resource, which would extend the United States' public resource pools, in terms of computing and access to different resources, to public institutions to do research. And what they're building into that process is a preference for projects that address issues like civil rights, like bias and discrimination, and how to address accountability and oversight in AI. And building those things into the structure of how you even fund research is a valuable way to build the principles and values that we care about in our governance directly into the AI ecosystem. We need to think more about ways that we can do things like that.

 

Stacey Lantagne  36:12  

Yeah, I agree. I think I've never seen it done perfectly, and I think that's because it's really difficult to do. So, first of all, we don't want to let the perfect be the enemy of the good. If it's good enough, let's do it, right? Let's just make a move. But at the same time, that's scary, because you don't want to make the wrong move, right? And I do think that I want the government to be involved. If this is a public good, then the government is, in theory, our steward of public goods; that's supposed to be what the government is doing. And so the government should be involved. But I also think we really need to balance the government with the private interests, which I know, I've been fighting against capitalism up here. But I think back to how long it took cable competition to happen, right? The government was like, no, no, no, the phone companies can't be competing on cable, and they stood in the way for a really, really long time. And then they got out of the way, and all of a sudden there was all this movement on cable prices and all of that. And so that's why I say it's difficult to do well: you have to hit this weird middle-of-the-road balance, which is hard for humans to do. We're just really bad at maintaining balance, always. I just used that word, and you're probably thinking in your head right now about how you're out of balance in your own life, right? It's really hard for humans to do that. So you're always going to be tipped one way or the other, and you just have to keep adjusting, I think, as you go along.

 

Yacine Jernite  37:48  

I think, as a machine learning person, what would help is more humility from machine learning people. If you talk to anyone who has been working on security for the last few decades, they're like, we have studied how to make things safe and secure, don't reinvent the wheel. We could do with a bit more multidisciplinary practice.

 

Brigitte Vézina  38:09  

Yeah, and I want to jump on what you just said, because, as you said, this is going to be a multi-pronged approach. And I think the reflex so far has been to look at copyright as the governance model, and almost exclusively copyright. And when you think copyright and openness, and, you know, regulating access and freedom of choice, well, Creative Commons comes to mind. And so we've been asked a lot: could Creative Commons build that new governance model, around access to works to train AI, around defining the copyright status of AI outputs? But I want to say that copyright is only one lens through which to look at these issues. And it's a very blunt tool, so it doesn't really get to the subtle nuances that we need to regulate this properly. So I think it's really important that we take a step back and look at all the other concerns that fall outside the scope of copyright, and throw them into the mix of the governance model that we want to build. So, you know, we need the social safety net for artists; we need an ethical framework that goes beyond the rules of copyright, because we want to make sure that these decisions are good and not evil, right? That's the basic principle of ethics. And we also want to foster community norms. We want to see how the community can sort these issues out through practice and through experience. And at Creative Commons, it's really important that our community can come together and discuss and find solutions to these issues, so that we can have a governance system that reflects the true problems people are facing on the ground. So I think this multi-pronged approach is central, and thank you for that question. Another question? Yeah.

 

Speaker 9  40:00  

Thank you very much to the whole panel. My question is mainly for Stacey, but also for everyone. I want to start off by saying that I'm talking as a fandom person, right, as well as various other things. And I think people outside of fandom don't really get quite how radical it is. Because places like Archive of Our Own, as a response to media that is produced for public consumption, are, to my mind, one of the purest examples of counterculture that's ever come out of the internet. And it still exists; it is literally counterculture. And the copyright architecture of that is central to how that community and that culture are fostered. And obviously, fandom existed, fanfiction existed, way before the internet; people were photocopying and stapling zines of the Kirk/Spock fanfic and going to conventions to exchange them in the 70s. But the internet made that a community and a culture in a completely different way that was and remains radical. As a British person who is non-binary, I kind of want a T-shirt that says JK Rowling doesn't have the rights to any of this. But my question is, I'd just love the panel to think about what kind of genuinely countercultural opportunities we might not be seeing with the rise of generative AI. Because nobody predicted that happening with fandom, and people really didn't notice it outside that culture. So that's the question: can we imagine anything like that? Yeah, maybe not right now.

 

Stacey Lantagne  42:00  

When you think about it, and you think about the history of fandom, there have always been groups of people pushing it forward against a lot of resistance, right? And the success of AO3 and the OTW, which honestly is rocky all the time, I'm really committed, but it's always an uphill battle over there. But it was very carefully cultivated to encourage this sort of counterculture that didn't have a place to grow. It wasn't being nurtured, right? Even on the internet, it didn't have a space where it could be safe; it was always getting thrown off of places, right? And so you're right that those things happen. But if you allow them space to grow, they happen better, right? They happen in better ways. And it's funny that you said that, because as we were sitting up here, one of the points I actually wrote down in my remarks is that we keep talking about culture like it's this behemoth thing, and we all agree on what the culture is. And actually, we all belong to many myriad tiny cultures, right? You talk about fandom, and I can stand up here talking about fandom, and I bet you that if we were at a fan convention together, we might not even overlap, because within fandom there are a million other different types of cultures, right? And so, yeah, I think that that is part of the fan reaction. And fandom, I'll speak as if it has one voice when I just said it's a plethora of communities, but there was a lot of outcry against their works being used as a dataset for ChatGPT. And I do think that part of that was: we feel like a culture that has been maligned and belittled and mocked, and we found a place where we could grow, and now someone is just going to steal it and bring us right back to the bigger culture and flatten it out, right? That at the end, what we end up with is a flatter version of culture.
I don't know if that's what's going to happen. But I do think that just talking about culture opens the question of what culture, whose culture, what are we even talking about here in the first place? So, yeah.

 

Yacine Jernite  44:02  

I agree with that. I would say also that we don't need to predict where it's going to go, as long as we make space for people to work on projects. One example I keep going back to is Wikipedia, that amazing resource that people are using right now. I don't think it would happen if someone started Wikipedia today, because of some of what was referred to earlier about the programming of the internet. People should be able to keep building their own AI systems, controlling the whole development chain, and having the power to decide what's going to come in at each of those stages. I think it is key to leave room for those projects to emerge.

 

Stacey Lantagne  44:47  

I literally wrote down Wikipedia as my one example of something I thought turned out better than I expected it to. And it gives me, I literally wrote it down, it gives me so much hope. In the early days, everyone was like, don't go to Wikipedia, and now it's the only place on the internet I trust, right? People are actually figuring out what's going on on Wikipedia. So yeah,

 

Nicholas Garcia  45:07  

I'll just say really quick, I think one of the counterculture opportunities that exists with AI goes toward what I was talking about, in terms of how AI can promote a culture of openness and sharing, again, in terms of people seeing the collective benefit. But also, a countercultural thing that could exist, and it's very related to fandom, is this idea of people really coming to terms with the fact that all culture is built on other culture, which has always been a core element of the whole sharing and openness community. But AI is maybe laying this bare in a really radical way, where people are seeing that culture and creativity are built on other culture and creativity. And there's a real counterculture opportunity there to really engage with that idea, pull it to the forefront, and stop pretending that stuff just springs fully formed out of people's heads, or, more accurately, out of movie studios.

 

Brigitte Vézina  45:57  

Thank you so much for the questions, and thank you to the panelists. I think you laid the foundation for more conversations in the following panel. Thank you.

 

Unknown Speaker  46:05  

Thank you for your contribution.

 

Announcer  46:14  

The Engelberg Center Live podcast is a production of the Engelberg Center on Innovation Law and Policy at NYU Law. It is released under a Creative Commons Attribution 4.0 International license. Our theme music is by Jessica Batke and is licensed under a Creative Commons Attribution 4.0 International license.