This episode is the “Diversity, inclusivity, sustainability, and cultural identity - what role for AI?” panel from the Generative AI & the Creativity Cycle Symposium hosted by Creative Commons at the Engelberg Center. It was recorded on September 13, 2023. This symposium is part of Creative Commons’ broader consultation with the cultural heritage, creative, and tech communities to support sharing knowledge and culture thoughtfully and in the public interest, in the age of generative AI. You can find the video recordings for all panels on Creative Commons’ YouTube channel, licensed openly via CC BY.
Matthew Allen (BRIC Arts) moderating a conversation with Allison Sherrick (METRO), Kengchakaj (elekhlekha artist collective), and Jocelyn Miyara (Creative Commons)
Welcome to Engelberg Center Live, a collection of audio from events held by the Engelberg Center on Innovation Law and Policy at NYU Law. This episode is the "Diversity, inclusivity, sustainability, and cultural identity: what role for AI?" panel from the Generative AI & the Creativity Cycle Symposium hosted by Creative Commons at the Engelberg Center. It was recorded on September 13, 2023. This symposium is part of Creative Commons' broader consultation with the cultural heritage, creative, and tech communities to support sharing knowledge and culture thoughtfully and in the public interest in the age of generative AI. You can find the video recordings for all panels on Creative Commons' YouTube channel, licensed openly via CC BY.
Matthew Allen 1:00
Good afternoon, everyone. Can you hear me? Wonderful. Just a real quick mic check with everyone.
Hello. Check one, two.
Matthew Allen 1:10
All right, wonderful. Good afternoon, everybody. My name is Matthew Allen. I'm a community producer liaison at the Brooklyn Free Speech TV and podcast network, part of BRIC Arts Media, which is part of the Downtown Brooklyn Arts Alliance in Brooklyn, New York. My job is to be an advocate for community producers at the media center of BRIC, which is a public access station in Brooklyn that gives people in Brooklyn an opportunity to create their own television shows and podcasts that we broadcast for them throughout the borough and the five boroughs. And I am your moderator today for a very interesting topic: "Diversity, inclusivity, sustainability, and cultural identity: what role for AI?" Now, throughout today, and probably over the last year or so, there's been a lot of talk about generative AI. A lot of that talk is geared around, you know, creativity, money, technical issues, and, as our last panel discussed, a lot of legal and ethical ramifications when it comes to copyright and intellectual property. But the cultural heritage and cultural aspects of how AI is being affected, or affecting things, is a topic that doesn't really get discussed much. So I hope that we are able to dive into it a little bit today. This panel will focus on the nuances of the necessary interplay between labeling of cultural heritage materials and the creation of data sets for ML/AI, with a particular view on emerging practices around ethical sharing of cultural heritage. So we have a wonderful panel for you today. We're going to start here on my right. I'm going to introduce this young lady: she is the digital projects and services manager at the Metropolitan New York Library Council, where she works with METRO's digital services team to extend METRO's repository hosting, metadata, and support services for digital collections and custom digital projects development. Please welcome Allison Sherrick.
I said please welcome Allison. Yes. To her right, we have a gentleman who is an award-winning pianist, improviser and composer, and electronics experimentalist, and one half of the artist collective elekhlekha, and his name is Kengchakaj.
Our final panelist today manages the open culture program, overseeing projects, logistics, and communications that support the open culture community for Creative Commons. Everyone please welcome Jocelyn Miyara. So I'd like to break the ice a little bit. Each of you can answer the same question, and I'll start with you, Allison: tell me about how you personally interact with AI every day, whether it's professional or personal.
Allison Sherrick 4:45
So in my work at the Metropolitan New York Library Council, we support the open source development of Archipelago Commons, which is a repository platform. As part of that work, for the past several years our standard deployment has used a natural language processing container. As part of our everyday workflow pipeline in the repository environments, we do post-processing on the OCR-extracted text, VTTs, and other kinds of text files to do natural language processing, entity extraction, and sentiment analysis. We have a separate interface in our deployments where the NLP-extracted entities are kept; we have classes for that. And that information is kept distinct from the human-mediated or human-curated metadata descriptions. In our upcoming phases of work, moving forward this year and into the next year, we are beginning to use image analysis tools as part of a potential internal cataloging and metadata description pipeline in the repository environments. And then in my everyday life, I probably use it way more than I realize.
Matthew Allen 6:05
Keng, same question to you.

Yes, I was going to go that way as well. There are probably a lot of things in everyday life that I'm kind of unconsciously using that are powered by AI somehow. But also, I guess, in my work, and this is probably a disclaimer: in my own creative work, we haven't really used AI yet. One of the reasons is we kind of hit a wall, and we can go into that later. But maybe that's it for now. Thanks.
Jocelyn Miyara 6:40
Sure. So we talk about AI a lot in organizing things like this. We are really thinking about how we can convene communities and different perspectives around AI. In terms of using AI at work, I'm not doing a lot of it. For fun, I've played around with GPT, I've played around with Midjourney, but haven't really incorporated it into my day to day yet.
Matthew Allen 7:06
Just for transparency, I have actively tried to avoid using AI in my line of business as a TV producer, TV advocate, and music journalist. I've been sort of trying to avoid using AI, particularly tools like ChatGPT. But full disclosure: I did use ChatGPT to assist me with naming conventions for a conference that I created and produced at BRIC, called the BRIC Media Maker Weekend. It helped me brainstorm naming conventions for workshops, as well as this year's theme, which is actually about AI. It's called "AI Evolution: Media Innovation and Disruption." So there we have that; please don't tell anybody. So I'd like to get things started with this issue, and each of you can answer. I wanted to talk about the aspect of cultural identity in AI. Jocelyn, I'll start with you, because you have an extensive background when it comes to this: what are some of the issues raised by AI around cultural identity?
Jocelyn Miyara 8:21
Sure, happy to speak to that. So I think one of the things that we hope to see on the internet is ourselves, and ourselves reflected. And one of the things that's really challenging about algorithms, and the way that AI algorithms work, is that they often play to the average. So when all of this information gets put in, you end up seeing outputs that are often normative, and that leaves out a lot of people. I would say there's also a challenge when it comes to input. All of this information, all of this content that has been created by all of humanity across time, already has its own biases. So you're putting all of that in, and then you're creating norms, and more biases come out. So I think there's a real challenge when it comes to this idea of wanting to see yourself reflected in these models, where they're often really playing to an average and end up kind of showing a normative presence, instead of the diversity that we see around the world and in ourselves.
Matthew Allen 9:26
Yeah, that calls to mind an issue I wrote about in an article recently. A while back, Capitol Records, very briefly, gave a record contract to a computer-generated rap artist. I don't know if you guys have heard this story. But last year, a computer-generated rap artist called, I think, FN Meka was created, and it was swiftly dropped less than 48 hours later, because it was dealing in a lot of racist Black stereotypes: this sort of racially ambiguous rapper. So the fact that non-Black creators computer-generated a racially ambiguous rapper who had all these negative stereotypes and was profusely using the N-word in its raps, that calls to mind what you're saying about the normative way of producing AI, and sort of leaving out certain cultures in terms of trying to be representative of them. So it's a very good point.
Jocelyn Miyara 10:30
And I'd love to add, in terms of the cultural heritage sector and inputting all of that information: you know, these wonderful collections that we have around the world are beginning to digitize, and so there's all this wonderful digital material from galleries, libraries, archives, and museums. And much of that comes from a colonial context, where these institutions originated with folks who were traveling the globe, so excited to see these different cultures, and then collecting them. And so the way that those items were collected, and the way that they're described and showcased, can also have problems. So when you feed all of that into AI, there are some considerations when it comes to sharing.
Matthew Allen 11:10
Now, another thing when it comes to cultural identity is the ramifications of what AI could mean, and I'll let anyone answer this particular question. I was having a conversation with someone from Microsoft a couple of weeks ago about some of the ramifications of AI, and he was saying, yes, a lot of people may lose their jobs because of AI, but it'll also open up a lot of jobs for people on the back end. But consider that there will be a lot of people from disenfranchised or underrepresented communities who could lose their jobs in the process and not have the kind of opportunities to get the new jobs that AI will create on the back end. What are your remarks or comments about that particular subject? Allison, if you have anything.
Allison Sherrick 12:02
Related to that, and I think it's come up a couple of times earlier today: there really is a severe lack of transparency and public knowledge around the practices and development mechanisms for some of these very popular large AI tools, their data sets, their methodology, their internal policies about how they select and potentially deselect materials for inclusion in datasets. And related to that, one of the big things that I always think about is how many of these large tools have been trained on datasets that were created using microwork, or ghost work, from already extremely exploited and marginalized communities outside of our Western systems, you know what I mean. And how is that going to be compounded again and again if we keep using these tools without understanding how they've been developed, and the people who put in the work to create the underlying data sets? What are we going to compound over time?
Matthew Allen 13:14
Keng, I have a question for you in terms of the artistic aspect of AI. Could you speak to some of your views, from a creative and artistic community standpoint, on what some of the biases, exclusion, and discrimination can mean for AI?
I think maybe I could start with the work that we do. We have this project called J., and our project is about kind of relearning this tuning system from Southeast Asia that may be lost in time sometime soon. When we started working on that, one of the things we thought about was: maybe we can use AI to help us find that narrative, or find those lost voices. But what we found is, first, there's no such data set in existence at all, so we would have to start from scratch in order to build that data set. And once we looked into it, it's a lot of work for just the two of us. But it's also about how we are going to approach it. For example, and this is one thing because I'm trained in the Western world: how are we going to name that pitch information? Are we going to use solfège, are we going to use do-re-mi, as a kind of input? What's also happened in real life is that some of the note names that things were supposed to be called have disappeared. They're gone. It's one of those ways we assimilated to Western culture: we don't have our own note names anymore. In Thai traditional music pedagogy, we also use do-re-mi, for example. Are we going to keep using that, or are we going to come up with a new way? And who am I, or who are we, to come up with those new names? Should tradition be consulted, and how? There's a lot to dig deeper into there. And I think that's not just us, not just a few people; it's maybe a systematic kind of work that needs funding and all that. Yeah.
Matthew Allen 16:08
Thank you very much for that. That's very, very insightful. For those of you who don't know: do-re-mi, fa-so-la-ti-do. That's my Sound of Music head over here. Do-re-mi, that's C, D, E, you know, on the scale, part of music theory. Do-re-mi, fa-so-la-ti: C, D, E, F, A, B. Sorry, I missed a G in there. But you're right, that is very much a European way of thinking about music theory, when other cultures don't always employ the pentatonic scale, or the circle of fifths, in the same way that people like Mozart and Chopin and Beethoven did, which got transferred here in the West and has held for many years. And that leads to this idea of education and access for the everyday citizen. I'll throw this back to you, Jocelyn: how can we prevent discrimination and biases via education? Because a lot of people are going to begin to want to use AI in their everyday lives, beyond just using things like ChatGPT, or using something they found on YouTube to make Frank Sinatra's voice sing "Get Low" by Lil Jon. So tell me how important it is to educate people on the proper way to use generative AI, to prevent any sort of cultural discrimination surrounding AI.
Jocelyn Miyara 17:49
Yeah, it's really tricky. You're starting to talk about this categorization, and the way that the Western world categorizes certain things, and how challenging that can be when it comes to a global perspective. And I think with AI, that's what we have, in a very specific flavor: something attempting to be a global perspective, but not necessarily always achieving that. I was just reflecting on these music notes as a means of categorization, similar to how we categorize things in collections, and how there are these labels that are put on collection items. There was this wonderful example from a curator about a jar, I believe it was at the Smithsonian Institution, whose metadata was revisited, because it was this jar that didn't have a whole lot of description. It's called the jar by Dave. They were able to go back and figure out that Dave had a full name that they could add to this collection. It was only labeled "slavery," and they put in "African American," and they added more context: that this person was also a poet, because there was some poetry on this vessel. So it really got me thinking about how it takes such care and treatment and work to go back into that record, to rethink it, to bring it into a modern context, and to take care of it. I think the same thing could be done with AI outputs, but it takes care and work, and I don't think there's one way. I think it's just being aware of where that work is coming from and what context it has. I think it'll be really tricky.

Matthew Allen 19:32
Yeah, context is very important, and nuance, and subtext. Those are extremely important in all aspects of life, particularly something as complex as this. So, Allison, you know, I'll rephrase the question, same for you: how can AI be a tool to amplify efforts for greater diversity, inclusivity, and sustainability?
Allison Sherrick 19:56
I think it can be a tool in our wider toolkit that is more multidisciplinary and socio-technical. I think we have the chance to continue to remediate our metadata and our descriptions, and to do analysis on our legacy collections. We know there are voices missing from the historical record; we know there are perspectives and experiences that are not present, or not well represented, in cultural heritage collections. So I think an area of potential benefit for AI tools could be using their analysis capabilities at scale to analyze our collections and identify areas where we need to improve, and also to identify terminology that we can remediate, update, and make more inclusive. And while we're doing all this, I think we also have a responsibility to preserve how we did things historically, so that we're not wiping away the historical record of wrongs that we did.
Matthew Allen 21:08
Yeah, one of the key things you talked about was the multidisciplinary aspect of it. As a 15-year veteran of BRIC Arts Media, I can tell you that multidisciplinary arts is extremely important in my line of work, where we deal with community media, contemporary art and performance art, and music particularly. So Keng, I'm going to pose this to you. Tell me, from your perspective, in terms of fusing multidisciplinary ways that we can incorporate AI: can AI be used to democratize creativity in art, in a way that empowers a lot of marginalized groups?
Maybe I can start with my own personal experience. (Yes, please.) Like we said before, we haven't really used AI yet, but we have started to use algorithmically generated music in our own project. And one thing I learned is that, because we have to go back, it's not only the note names, or do-re-mi, but also the pitch information itself that is different. It kind of forced me to go back into all of this and relearn, and unlearn my ears, my Western ears. By doing all of this algorithmic work, I slowly unlearn that. In the beginning you might still hear it as out of tune, but right now, and hopefully in the future, if we have a chance to present this work more, this could be normalized, in a way where you might say: oh, this is just another sound that you know, and it's fine, it is not out of tune. I think that's one of the things that maybe could help. Another thing I can think about that maybe democratizes this, and I'm not sure if it is related to AI yet, or maybe in the future: another project that we do is called the gong ensemble. This project is inspired by an indigenous practice you find throughout Southeast Asia, where each member of the ensemble has only one gong, and they have to play together and listen to each other in order to create this one ritual music. We started doing that, instead of with actual live gongs, with live coders, giving them a specific pitch to work with. In a way, I feel like we kind of pause, because you only have one sound: you have to play with me in order to create this music. We also ask the audience to join in with whatever they have in their pockets, or found objects.
And in that way, we create this collaboration between machine and human, and the machine also loses agency, and the human also loses agency, because none of us could do it just on our own. So it's kind of trying to force that collaboration to happen. I'm not sure if it's connected to AI, but hopefully it democratizes this agency a little bit. Yeah.
Matthew Allen 24:53
I want to get back to that idea of collaboration with AI soon, but you did speak to something that I wanted to get to, and I'll ask each of you to chime in. You talked about indigenous art. One of the issues that a lot of people, particularly me, have with generative AI is this sort of replacement: you know, the fear of it replacing these tried-and-true traditional ways that we create things, particularly when it comes to music and art, which play a huge part in a lot of the underserved communities we're talking about today. In terms of AI, there's a fear that it could commodify, or replace outright, the process of creating indigenous work. How do we push back against that, number one, in an effort to normalize AI in a positive way, and also to reassure the public that these indigenous traditions and indigenous arts can be enhanced or moved forward using AI, rather than being replaced outright?
Jocelyn Miyara 26:13
I'll just plug here: there's a really wonderful set of labels called Traditional Knowledge Labels, which are made by Local Contexts and are applied to traditional knowledge works in order to advise people on how to share them. So for example, there might be a label that says that a work is meant to be seen by women, so that an institution can share it in a way that enables that preference. I think it would be really important to apply that to the way those works might get ingested into AI. But I also just think it's really important, in terms of outputs, to think about how something that seems like an indigenous work of art maybe isn't made by an indigenous person, and to be cautious about that. I would love for my co-panelists to add more.
Allison Sherrick 27:10
I think a place to engage would be to actually ask for permission, and for interaction and collaboration: to say, do you want to participate in this process? How do you want to participate? What do we need to do differently for the indigenous communities? One thing I did want to talk about related to this topic is that, for some of the language model trainings, we have some of these large tech corporations that say, oh, they have multilingual support for certain indigenous languages. But internally, they're using metrics where they can have a 50% accuracy rate, and then they'll claim that they have multilingual support. But if 50% of the communication that you're producing is inaccurate (maybe that's what's happening to all of you right now as I'm speaking, hopefully not), can you really claim that you are saving this indigenous language, that you're preserving it? Or are you commodifying it? Are you checking it off your list, saying: oh, we're doing what we should do, we're preserving cultural heritage, we're doing the right thing here? So I think we need to have consent within the communities that we say we are representing.
Matthew Allen 28:23
Consent is an extremely important thing when it comes to these matters. Particularly for me, as a Black man: having consent from other communities when it comes to collaboration, or incorporation of certain things that are indigenous to people like me, or African American, or people I know who are Caribbean American, or anything of that nature, that's a huge part of it, particularly when it comes to collaboration. So Keng, just ending with you on that particular point: collaboration is a particularly important thing for you, and not just between you and your partner, but also between yourself and code. So tell me, in this one last exchange before I get to the final question for each of you, about the importance of easing the minds of people when it comes to trying to get them to collaborate more with AI, rather than allowing it to be a replacement.
Right. I'm probably going to go back to the practice that I do again. Well, first of all, the way I collaborate with the technology right now is that I use it to kind of unlearn my biased ear, and also as a point of inspiration: oh, maybe there's another way we can do this, or maybe there's another way we can hear this. But I also look at, for example, the gong ensemble work, and find that it's not only collaboration between humans but also maybe collaboration between human and machine, and that maybe there's another way we can look at this collaboration. Maybe it has to be more interlocked, more interplay, more... what's the word... interdependent. Yeah, I think that's kind of where we are at right now, and we want to see what it is going to bring next.
Matthew Allen 30:52
I do have one last question I'll ask each of you, starting with you, Jocelyn. What guidance would you personally offer to ensure generative AI is something that's ethical?
Jocelyn Miyara 31:13
That's a big question. But I think one piece of advice I would give is: you know, what problem are you trying to solve? And did what you just do actually solve that problem? I was thinking a little bit about this Levi's ad campaign, where Levi's decided, instead of hiring a diverse set of models, to use generative AI to create images of a diverse set of models. Well, you know, that created a diverse-looking ad campaign, but you missed out on hiring a bunch of diverse models and actually paying people who could probably use the money. So I think it's important to think about the impact of the use of your AI, as well as the output.
In terms of ethics, I think there's a lot more, especially when it comes to cultural heritage. There's a lot more research, resources, and funding that we need, maybe working on that and thinking about, let's say, input. Just to focus on the pitch information: there's already a lack of that information. So I think maybe we could start with putting more resources into finding data sets that are more true to the culture itself, and ethical in a way, I guess.
Allison Sherrick 32:54
I think starting from the framework of AI as a tool, not as a solution, is always a good place, because I feel like there's a lot of emphasis and hype right now on AI as the solution to all the problems, and everything will be golden and magical. But there's no hand-waving in certain situations, and we're still going to have to keep doing our cross-disciplinary work to be good stewards of cultural heritage and the historical record. In terms of more nuts-and-bolts, practical things: I think we need to keep pressing for more transparency from big tech companies. We're making some very broad-strokes, educated guesses about their tooling under the hood, but I think it's time for us to say: no, really, we need to better understand where your training materials are sourced from, if we want to keep using the outputs that these big tech companies are creating. And I think we need to have more collaboration, where we draw from our cross-disciplinary expertise to make sure that we're not coming from just a technical perspective, or just an internally focused perspective, but that we keep trying to widen the voices that we're listening to.
Matthew Allen 34:15
Thank you very, very much. Very eloquent; thank you, I appreciate it. And thank all of you for participating in this. Before I open it up to questions, I just want to get a quick show of hands. How many of you, before coming to this event, were optimistic about AI and the direction it's going? Show of hands; this is a safe space. Okay. So how many of you here felt, even before you got here, that AI is a bad idea and that it's going to lead us to bad places? Show of hands, safe space. Okay, yeah, someone way in the back. And how many of you are going to watch The Terminator and The Matrix much differently, knowing what we know now about AI? I'd like to open it up for questions, if anybody has any. We have someone who's going to come to you. Sir, in the front.
Speaker 7 35:23
Hi. I found it really interesting that the thematic underpinning of a lot of what you're talking about was this kind of pursuit of detaching the framing of AI from a more technocratic orthodoxy. I've been reading a lot about reframing AI as applied practice instead of a model, or as contextual privacy instead of just privacy. I think it's fair to say that non-technical communities, particularly creative ones, have more capacity, by the nature of their work, to introduce a more diverse terminology in this type of space. So my question is: how do you leverage a diverse lexicon when it comes to this kind of thing, in public education, as you mentioned, but also in retaliation against a lot of the risks that you were describing in your lines of work?
Allison Sherrick 36:24
Don't all answer at once. I think there's a lot to be said about framing AI in a non-technocratic way, because right now, sometimes even just the terminology that we're all using feels like it's not settled. It's like shifting ground, and we're all trying to describe it and what it really means. And I think there's a lot of room in the cultural heritage space, in the library, archives, and museum space, to define things on our terms, in ways that are respectful of the communities that we're supposed to be representing. And then also to do education for ourselves, for our patrons, our users, our researchers, and to kind of manage understanding and expectations as tools are developed, rather than just sitting back and saying: oh, this is AI, and we're going to end up with HAL from 2001: A Space Odyssey, which is what I watched before this.
Matthew Allen 37:30
We'll open it up for more. Any other questions? Yes, you in the front?
Speaker 8 37:43
Thanks. So when I think about, I guess, basically diversity and inclusion, especially when it comes to AI and language models, and inclusion into what we've already talked about: some of the issues around classification, how things have been interpreted, and how a lot of large language models have been trained and built through a Western lens. There's an example, a Medium article called "The Myth of the American Smile," about how, if you ask for a photo of a group of people across different cultures who historically wouldn't have, you know, smiled in photos, you get the same American smile across everything. So I guess the question I want to ask, and I'm trying to frame it properly, is: what's a question you'd ask yourself, or a practice you'd adopt, or something you'd like to see done differently in the future, to ensure genuine cultural nuance and expression with AI?
Jocelyn Miyara 38:35
Yeah, I can take that one. I think one of the things that's really important, and I hope this is happening, is that in organizations or companies that are creating AI models there is diversity in hiring, and that the people creating these models come from diverse perspectives. A speaker earlier gave a great example of teaching students about things on the internet, and talked about how the boys were perfectly comfortable with sharing everything openly while the girls were a little more hesitant. And, you know, I think that's because girls from an early age have an experience of the male gaze and of being protective because of it, and that experience shapes how they relate to different kinds of sharing online. So I think it's really important that tech companies really focus on getting that diverse perspective in their coding rooms.
Allison Sherrick 39:26
Thank you. I'd just say, related to that, I think it's interesting, the number of, I guess you'd call them AI oversight or ethical-analysis working groups, something like the Algorithmic Justice League. A lot of these groups were founded by diverse people who were fired from the big tech companies for asking hard questions and being whistleblowers. So I think we know it's a problem. So maybe listen to their perspective, to what they're flagging as problematic and what they're seeing.
Matthew Allen 40:11
Thank you. Any other questions? Okay, so I'd like to end on this note, just from a personal standpoint. Yeah, AI does sort of scare me, just because of the trajectory of progress in my lifetime. One of the things I've noticed, particularly as a person involved in documenting the entertainment, art, and cultural business, is that convenience reigns supreme almost all the time, which makes AI very worrisome for me. In the last panel, a gentleman talked about the fact that you still need a human being to initiate the AI to do its thing. And I keep thinking, it's only going to take one person to create generative AI that doesn't need a person to initiate it, because, like I said, progress and convenience will always reign supreme. It only takes one person to make that innovation, and then we're in real trouble. But the important thing to understand is, number one, the educational aspect of AI. I think conferences like this, and these sorts of conversations, help us understand that AI should ultimately be used as an auxiliary, in my opinion, an assist, rather than something that's going to take over and just make us all lazy, for lack of a better term, which I learned in the conference yesterday is kind of a misnomer. So I just encourage every one of you to take something from these panel discussions and these sorts of opportunities, get as much education as you can, and then pass it on to the people who you feel need it most. Because what's ultimately going to level the playing field is educating people, and an even playing field in education for all communities is a hugely important part of cultural identity. Thank you all very much.
The Engelberg Center Live podcast is a production of the Engelberg Center on Innovation Law and Policy at NYU Law and is released under a Creative Commons Attribution 4.0 International license. Our theme music is by Jessica Batke and is licensed under a Creative Commons Attribution 4.0 International license.