Engelberg Center Live!

Legal Implications in AI: The Case of Clearview

Episode Summary

Today we're exploring the story of Clearview AI, a private company that has developed facial recognition software, sold it to law enforcement, and is now being sued by a community of activists who allege that they are being illegally identified at protests. What is the role of law when it comes to AI? Which laws apply, and which are still to be written? Who gets to determine whose free speech matters most? And how will a ruling on this case affect the many current and ongoing debates about privacy, surveillance, and the exploitation of our personal data?

Episode Notes

The Blue Dot Sessions, “Copley Beat,” “Plate Glass,” “Flashing Runner,” “Fifteen Street,” “Silver Lanyard,” “Greylock,” “Cornicob,” “Nine Count,” “Tall Journey”

Episode Transcription

Tamar: Melodi Dincer is a member of the Knowing Machines legal team and a freshly minted lawyer, which means, of course, that she recently had to take and pass the bar exam. Now, this would be a pretty unpleasant experience under normal conditions, but in 2020, Melodi was served an even less enjoyable twist.

 

Mel: It was fully remote, meaning that I had to use a computer that I already had, and I had to download software onto that computer that had to verify that I was the person taking the bar exam that I said I was.

 

Tamar: That verification included Melodi uploading her government-issued ID, with a clear picture of her face.

 

Mel: And that was just to prove that I am who I say I am, using essentially a computer's version of what I look like. So when you use those systems, you're verifying who you are by having a piece of technology, an algorithm, essentially measure the distance between your eyebrows, the size and the shape of your nose, to say that, yeah, this is Melodi Dincer, this is her student ID number, taking this bar exam.
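To make that concrete: below is a minimal sketch, in Python, of the 1:1 verification step Melodi is describing, comparing the face on a government ID to the face at the webcam and accepting the match only if the two are similar enough. The embed() stub and the 0.8 threshold are illustrative assumptions, not the actual proctoring software, which uses a trained face-embedding model whose details aren't public.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    # Placeholder for a face-embedding neural network: a real model maps a
    # face image to a fixed-length vector so that photos of the same person
    # land close together. Here we just derive a deterministic pseudo-random
    # vector from the pixels so the sketch runs end to end.
    seed = int(np.abs(face_image).sum()) % (2**32)
    return np.random.default_rng(seed).normal(size=128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(id_photo: np.ndarray, webcam_frame: np.ndarray, threshold: float = 0.8) -> bool:
    # "Is the person at the keyboard the person on the government ID?"
    return cosine_similarity(embed(id_photo), embed(webcam_frame)) >= threshold

# Identical images trivially verify; a real system has to tolerate lighting,
# angle, and expression changes while still rejecting different people.
photo = np.ones((64, 64))
print(verify(photo, photo))  # True
```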

 

Tamar: But facial recognition wasn't only verifying her identity to be able to take the exam; it did it the entire time she was taking the exam too.

 

Mel: To make sure that we weren't cheating, or we didn't have someone on the other side of the computer feeding us answers. Somehow the technology made us essentially stare at the web camera built into our laptops as much as we possibly could while reading on the screen the prompts for the exam, and also typing our very long essay answers and analyses, all in real time and ensuring that we were in a quiet enough environment that that system couldn't pick up [00:02:00] any sound around us, because that might mean there's another person in the room.

 

Tamar: So you have the stress of taking the exam, which will, of course, determine all of your career prospects. And then you add the stress of remembering to look at your webcam while your brain is crunching this incredibly dense information to pass that exam. That, again, will determine all of your career prospects. And then, if that isn't enough, you have to worry about the weather.

 

Mel: The day after my bar exam ended (and it was a two-day process), there was a massive thunder and lightning storm in my area, and it knocked out all power for three days. And if that had happened two days sooner, I would not only have been unable to take the exam and eventually pass it, I would have had to continue doing non-attorney-type work at my real full-time job for at least six more months until I could retake the exam, and that very much so stressed me out. And I remember feeling so fatigued after the exam, not only because of how hard I was using my brain, but I really believe that it was just the kind of environmental strain of having to stare at my computer and having my bathroom breaks regimented so strictly.

 

Tamar: And did all of this have an impact on your experience of the exam?

 

Mel: It absolutely impacted my experience of the exam.

 

Tamar: From the Engelberg Center on Innovation, Law and Policy at NYU School of Law and USC's Annenberg School for Communication and Journalism, this is Knowing Machines, a podcast and research project [00:04:00] about how we train AI systems to interpret the world, supported by the Alfred P. Sloan Foundation. I'm your host, Tamar Avishai, and today we're talking about the legal implications of AI, whether it's how a student like Melodi takes the bar exam or the focus of our episode: the story of Clearview AI, a technology company at the heart of many of the current debates over privacy, surveillance and the exploitation of personal data.

 

People have to be fed up and say, enough is enough. Enough is.

 

Tamar: Our story starts with a lawsuit in 2021. An organization called Just Futures Law, a group of lawyers who identify as movement lawyers, came together to represent a group of protesters and activists in California, alleging that the…

 

Mel: Local police departments in those areas are able to essentially chill their speech and their activism because they use a facial recognition service that's developed and sold by this company, Clearview AI Inc.

 

Tamar: The protesters are suing Clearview. They claim that police departments have used this facial recognition tool not only to identify and target activists at Black Lives Matter protests, but to track them from protest to protest. And this, of course, makes them reconsider protesting at all.

 

Mel: Even just the knowledge that this system exists, that it can identify specific people who are expressing their political rights, that is in and of itself enough to discourage protesters from going [00:06:00] out in the streets and taking part in these actions.

 

Tamar: So they filed the lawsuit alleging that Clearview has violated these protesters' rights under the California Constitution and also under additional consumer protection and privacy laws. This technology, they claim, opens them up to identification and retaliation from law enforcement, from ICE, from local police departments.

 

Mel: And that's really where their concern about the use of this facial recognition surveillance comes from.

 

Jason: Without permission, Clearview had trained its system on the faces of many of these activists.

 

Tamar: This is Jason Schultz, a professor of clinical law at NYU and a team lead for the Knowing Machines project.

 

Jason: And they said, wait a minute, it's my face. You can't use it against me, essentially. That's sort of the heart of the claim.

 

Tamar: This is an unusual legal claim. It falls under what's called the right of publicity, which was part of the allegation that these activists brought. The right of publicity basically states that someone has control over how their image can be used for any commercial purpose. It's an old law with fascinating roots in the late 19th and early 20th century, when portrait photography was newly in vogue. But before there was an established modeling industry, advertisers would just go to portrait studios and buy the negatives to use in their ads. So a normal person could just walk down the street and see their face on a huge advertisement for pantyhose, toilet paper, cooking products, you name it, all without that person's permission. You can imagine how weird, even disturbing, that would be. So they sued.

 

Mel: And the first lawsuits that were brought in the 1900s were brought typically by women who were housewives, working women in factories who were otherwise relatively [00:08:00] ordinary, except that their face or their likeness was taken by a company and used in a way that's making that company sell products and make money.

 

Tamar: This commercial appropriation of someone's identity, which the right of publicity is meant to protect, goes straight to the heart of the story here. Clearview and the protesters, and the legal and moral issues that arise when we think about the role of facial recognition and surveillance in general. These are tools that are fed by datasets which themselves are fed by the images that we upload onto Facebook and Instagram and TikTok every day, often without even batting an eyelash. Private companies are using datasets to train systems to recognize our faces, and we have no say in what they do with that information.

 

Jason: What I think most people don't realize is how vulnerable they are to their information being taken and used in ways that they would never consent to. The core question here, in many ways, is how do we hold accountable other companies who scrape websites, where you might have agreed to that website's terms, but you didn't agree to let anyone on the internet use it for any purpose?

 

Tamar: Yeah. I mean, what if you've only agreed to let a particular website like Instagram have your image? Who is to stop the rest of the internet from taking it? How would they justify it, and how could you legally stop it? And what makes all of this particularly interesting and thorny is the relatively new practice of AI companies scraping billions of faces from the internet to build their systems. Add on top of that controversial new products like Clearview's facial recognition app, and [00:10:00] you have a whole new set of legal issues for the courts to decide. So how do we reconcile these brand new issues with already established legal precedents? The right of publicity, as we've said, has been around since the turn of the 20th century, protecting our images and identities against commercial exploitation by new technologies, from photography to television to video games. Clearview's actions would seem to fall under this law, but they have a defense, too. And it's not some new futuristic argument about the superiority of AI. Instead, they're also invoking an old argument from a very old amendment: free speech.

 

 

A principle that supports the freedom of an individual or a community to articulate their opinions and ideas without fear of retaliation, censorship, or legal sanction.  You know the one.  But before we get into how Clearview is using free speech as a defense for what they’re doing, let’s quickly explain… what they’re actually doing.  From a technical perspective.

 

The process itself, how they take the personal images that we upload, turn them into data, crunch them into faceless numbers, and then turn that data back into identifiable images again, feels like something out of Minority Report. Here's Melodi again.

 

Mel: Measuring the space between your eyebrows, the shape of your eyes, your nose, your lips, and all of the features of your face to create essentially what they call a facial vector. It's basically a little, you could say, faceprint that's [00:12:00] unique to you, or at least in that image of you. And then the algorithm itself is trained on the vectors of people's faces who are similar, right? They share similar aspects of their facial features. And that's essentially what speeds up the process, through training on these billions of images that are clustered in this way, that the algorithm can work through very efficiently. And that's really how facial recognition works at its core.
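As a rough illustration of that "facial vector" idea, here is a minimal Python sketch that reduces a face to a short list of measurements between landmarks. The landmark names and the four measurements are invented for the example; real systems learn hundreds of dimensions from data rather than hand-picking them, and Clearview's actual features are not public.

```python
import numpy as np

def faceprint(landmarks: dict) -> np.ndarray:
    # Turn a few facial landmark coordinates (x, y) into a numeric vector.
    def dist(a, b):
        return float(np.linalg.norm(np.subtract(landmarks[a], landmarks[b])))

    return np.array([
        dist("left_eyebrow", "right_eyebrow"),  # space between the eyebrows
        dist("left_eye", "right_eye"),          # distance between the eyes
        dist("nose_bridge", "nose_tip"),        # rough nose length
        dist("mouth_left", "mouth_right"),      # mouth width
    ])

# Hypothetical landmarks for one image of one face.
example = {
    "left_eyebrow": (30, 40), "right_eyebrow": (70, 40),
    "left_eye": (35, 50), "right_eye": (65, 50),
    "nose_bridge": (50, 50), "nose_tip": (50, 70),
    "mouth_left": (38, 85), "mouth_right": (62, 85),
}
print(faceprint(example))  # a toy faceprint: [40. 30. 20. 24.]
```

Two photos of the same face should produce nearby vectors, and different faces should land farther apart, which is what lets the clustering and matching Melodi describes work at scale.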

 

Tamar: We deal with facial recognition every day. I mean, how many times do you unlock your iPhone just by looking at it? And it's a convenience, not a concern. But a big difference here is that Apple is only using the information that it's gathered to identify your face for your device, with your explicit permission. It's not selling a commercial product to government agencies, trained on your face and used to compare your face to any photo or video a user uploads. But okay, back to Clearview. Once they created these faceprints, which are essentially our online fingerprints, they put them back into their own database and then licensed their end product, an app, to law enforcement. Officers upload photos onto this app and then run them through Clearview's database to see if there are matches. So in other words, Clearview AI is selling an app that is built on your face, on our data. Without our faces to train it, there is no app.
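And here is a minimal sketch of that 1:N lookup: a database, or gallery, of stored faceprints, queried with the vector computed from a newly uploaded photo. The profile names, the random vectors, and the 0.85 threshold are made up for illustration; Clearview's actual index and matching pipeline are not public.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query: np.ndarray, gallery: dict, threshold: float = 0.85):
    # Return the closest stored identity, or None if nothing is close enough.
    name, score = max(((n, cosine(query, v)) for n, v in gallery.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None

rng = np.random.default_rng(0)
gallery = {
    "scraped_profile_123": rng.normal(size=128),
    "scraped_profile_456": rng.normal(size=128),
}
# A new photo of the same face as profile 123, plus a little noise.
query = gallery["scraped_profile_123"] + rng.normal(scale=0.05, size=128)
print(best_match(query, gallery))  # -> "scraped_profile_123"
```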

 

 

And Clearview is arguing that this entire process is protected by the First Amendment.

 

Jason: I think in some sense, what Clearview is trying to do is to protect its business model.

 

Tamar: Jason Schultz again.

 

Jason: That's what they're trying to do here. They're trying to say our business model is protected by the First Amendment because it's on the internet. The internet has speech on it. We transmit information that looks like speech. Therefore what we buy and sell with police officers and law enforcement in terms of this app and this service is going to be protected there.

 

Tamar: So let's dive a little deeper into the First Amendment as the public generally understands it. The First Amendment provides protection for pure speech, the things we say, write, or otherwise express. But it doesn't generally protect conduct, except for every once in a while when conduct can be expressive too, like, say, burning a flag in protest. And in those kinds of cases, courts have to balance protecting expression with holding actors accountable for conduct that may cause harm. And like with right of publicity, we think of the First Amendment as existing to protect people, giving everyone an equal voice no matter how much power they wield. So, at least to me, outside the legal system, it would seem that Clearview's defense is a long shot. But if they win their case by claiming First Amendment protections, the repercussions go far beyond this one company, this one lawsuit.

 

Jason: This could really let a lot of companies do whatever they want without any constraint. And so I think the implication of this case is that we want to make sure that the court gets it right, because if the court ends up getting it wrong, it could really allow a lot of companies to do a lot of harm with our images and our data without any accountability whatsoever. That's [00:16:00] the concern, right? There has to be the ability to have laws to constrain companies that want to use our images and our data, especially when they use it against our interests, as Clearview is here for the plaintiffs.

 

Tamar: So those are our players and, broadly, their positions. But there are always more tactical tricks up sleeves. And this is where, I beg your pardon, we need to talk about SLAPPs.

 

Jake: SLAPP stands for Strategic Lawsuit Against Public Participation.

 

 

Tamar: This is Jake Karr, another member of the Knowing Machines legal team.

 

 

Jake: And that title, that acronym really says it all. It's about protecting the little guys from lawsuits that are clearly meant to stifle their speech or petition or assembly rights.

 

Tamar: Or at least it should be, because SLAPP lawsuits are notoriously bogus. They're brought by individuals or entities ostensibly over claims like defamation, but really to shut the other side up, to silence their free speech. The most famous example of a SLAPP suit was one that was brought against Oprah by cattle ranchers in 1996, at the height of mad cow disease, after she said she'd never eat another hamburger. They sued her for more than $10 million in damages, claiming that Oprah had committed libel. And I'll end the suspense here: she won.

 

Oprah: Um, my reaction is that free speech not only lives, it rocks.

 

Tamar: But most people who fight these kinds of lawsuits aren't Oprah. Most of us would get buried in paperwork and bankrupted by legal fees. So what does the average Joe on the street bring to a SLAPP fight? Anti-SLAPP laws.

 

Jake: The anti-SLAPP [00:18:00] laws are meant to provide David with a shield against Goliath, and there are now statutes like this across the country to protect defendants on the receiving end of what are essentially meritless lawsuits that are attempts to bully speakers against speaking out.

 

Tamar: Okay, so why are we talking about SLAPP laws and anti-SLAPP laws and Oprah? Because we need to explain what Clearview did next. We already know that they're making this a free speech issue, claiming that their exploitation of the protesters' faces to sell an app against the protesters was an act in furtherance of their free speech. The free speech of Clearview, you know, the secretive tech company valued at more than $100 million, against the protesters, you know, the ones exercising their free speech. We've already talked about this ridiculous legal pretzel. But then Clearview invoked California's anti-SLAPP statute. With these protections, if Clearview got their way, they would basically dismiss the protesters as cleanly as Oprah dismissed the cattle ranchers. Clearview would invoke the same anti-SLAPP protections, which, again, are meant to protect the little guys, claiming that the lawsuit brought by the protesters is so bogus that they could actually be denied their day in court.

 

Jake: Defendants are David, and they use anti-SLAPP motions to protect themselves from the financial and emotional costs of having to defend frivolous, meritless litigation. Here, though, Clearview is the defendant. The defendant is Goliath, and they're trying to use anti-SLAPP essentially not as a shield to protect them, but as a weapon against the Davids who have brought this lawsuit against Clearview [00:20:00] both to protect and vindicate their own privacy rights, but also to protect their own rights to expression and demonstration and protest, which they feel are being chilled by Clearview's licensing of their software to law enforcement.

 

Tamar: This is really unsettling: the idea that laws that have served to protect protesters from giant corporations who seek to silence them are being actively exploited by those very corporations to exempt their products from legal regulation and overall accountability. This should concern all of us. And it concerned Jason and his team. And this is where they enter the story, in 2022. They filed an amicus brief, a legal document whose name basically translates as "friend of the court," written by people outside the case who have expertise to share that might help inform the court's decision. The Knowing Machines project, of course, knows AI, and especially the data used to train it. And this brief, written by Melodi and Jason with help from several NYU law students, was persuasive enough to bring together over 20 leading scholars in the AI field to sign on. The brief focused chiefly on right of publicity, again, that early 20th century precedent that protects people against the non-consensual commercial use of their images. But there's a possibility that the court may not agree with them on the merits of the law itself. So they're expanding their advocacy by highlighting biometric surveillance, the ways that training sets are chock full of our biometric data, which are the ways our faces are shaped and spliced and captured, and how they change as we move and as we feel things, [00:22:00] the way emotions contort our expressions.

 

Mel: So we wanted to focus our comments on a specific type of biometric surveillance that hasn't gotten as much attention, but we see as essentially the same in terms of its impact on society and its potential dangers. And that's emotion recognition.

 

Tamar: Like with Clearview's facial recognition systems, emotion recognition systems are based on huge training datasets that basically teach an algorithm to read someone's face and identify their emotions, upturned mouths and tinted eyebrows as a means of determining happy or sad. Emotion recognition tools are often sold as add-ons, as bonus features of facial recognition software. Now, to be clear, Clearview does not offer emotion recognition software as part of their app. But the Knowing Machines team wanted to focus attention on the fact that we don't have a lot of autonomy over how our data ends up in these training sets that then enable companies like Clearview to train their facial recognition algorithms. So back to the team. Their amicus brief focused on right of publicity. And then, in September of 2022, the Federal Trade Commission put out a call for experts to comment on the different facets of the way that companies commodify our data and these surveillance practices. So that seemed like an ideal time to respond with a focus on emotion recognition and how totally scientifically unfounded it is.
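Mechanically, an emotion recognition system is just a classifier that maps facial measurements to an emotion label. The toy rule below is a sketch of that leap, with invented feature names and thresholds standing in for a trained model; real products learn from large labeled datasets, but the inference from facial geometry to inner state is exactly the one the episode questions.

```python
def guess_emotion(mouth_corner_lift: float, inner_brow_raise: float) -> str:
    # Hypothetical hand-written rule in place of a trained classifier.
    if mouth_corner_lift > 0.2:
        return "happy"
    if inner_brow_raise > 0.3:
        return "sad"
    return "neutral"

print(guess_emotion(mouth_corner_lift=0.35, inner_brow_raise=0.0))  # "happy"
```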

 

Mel: In our comment, we really focus on the fact that there are a lot of underlying fundamental questions that have been unanswered. So what is an emotion? How are emotions even expressed? Are we ever neutral or emotionless? And how do we as humans [00:24:00] try to interpret or infer our emotions, either our own or those of other people?

 

Tamar: But even though these questions remain unanswered, because they're really, really hard, if not impossible, to conclusively answer, companies are still going ahead and developing and marketing emotion recognition. The market for it is booming, and it will be used on mental health patients, job candidates, jurors in a courtroom as they're responding to opening and closing arguments, law students like Melodi taking the bar exam. Oh, and you know those border agents at the airport that have now been replaced by machines? Those machines aren't just taking your picture. They're potentially analyzing your face to determine why you traveled here. Many of us imagine that something trained by a computer is and should be neutral. It seems like the only silver lining to this brave new world, that reducing our images to binary zeros and ones will at least eliminate bias. A computer doesn't see class or race or secretly think xenophobic thoughts. But of course, as we will find throughout the Knowing Machines project, that's not the case. Humans trained these machines. Of course there's bias baked into these systems. And it's not just that taking an exam with the pressure of maintaining eye contact with a webcam could very well affect your score. Poorer people with lousy webcams or spotty Wi-Fi are at a unique disadvantage. The bulk of this training data often comes from Caucasian faces, which means that a Black candidate staring into a camera might have to smile bigger and longer to achieve the same results. Which, as many of them might say, they would need to do anyway in person when interviewed by a human. And this is why the legal is personal too.

 

Mel: Yeah, I find this whole topic really compelling personally, especially because I'm Turkish and I grew up going back and forth to the country at a time when the internet was really heavily censored, and it still is today. But when I was in high school, that was around the first time that the government in Turkey was really cracking down on protesters using the internet. And I think that this case, this Clearview case in particular, and more generally the way that facial recognition and emotion recognition systems can be used to specifically identify and target us based on who we are and how we look and where we go and what we do and what we believe in, is something that I don't think many people in the US are aware can impact them, because we do really hold strong to the notion that we have First Amendment rights that protect us in these specific ways. And this case is really interesting in that it flips that assumption on its head. Because here you have a group of people who are already claiming that if you were active in, for example, the 2020 Black Lives Matter protests following the murder of George Floyd, and I live in DC, and if I was at those protests in DC, then several federal law enforcement agencies, like the FBI, the Park Police, and even the US Postal Inspection Service, which is actually the oldest law enforcement agency in the country, they used Clearview specifically to identify people suspected of damaging Postal Service property during the protests, [00:28:00] or any type of broadly defined crime that police believed may have occurred. They were able to access this incredibly powerful system and identify individual people, and that's something that I personally have seen in Turkey and around the Middle East. But it's jarring to see how quickly and easily it has been integrated into the US law enforcement ecosystem.

 

Tamar: The case is still pending. We don't know what's going to happen with Clearview and the protesters, even if we can try to predict the knock-on effects of either outcome for companies, for free speech, for us as individuals. And that actually got me thinking about my own role here. I upload photos to Instagram of me, my kids, my life. I'm feeding the machine. I'm training the datasets, one icing-smeared birthday party photo at a time. I could be contributing today to my son being identified and arrested in the future, should he exercise his legal First Amendment right to assemble and protest what he believes to be unjust, because a giant company is using those same rights to silence him. I asked Jason if I should feel as bad about this as I was starting to.

 

Jason: Yeah, there are ways, you know, things we can do as individuals, but society and government regulation is there to take care of us collectively, right? To prevent false and deceptive advertising of products and unsafe products and unfair business practices. And to expect each of us individually to micromanage all of our activities online is just incredibly stressful, and it's such a burden, and unfair, I think, in itself. So I don't think [00:30:00] the burden should be placed on us as individuals. I mean, obviously we should do whatever we feel we want to do to try to minimize the exposure we have. But I think ultimately it's on the laws and the government and the court system to enforce our rights and to regulate in a way where we don't have to be stressed about thinking about this all the time, and we don't have our own data and our own images used against us.

 

Tamar: Next time on Knowing Machines: What do we know about the actual creators of datasets? What kinds of micro and macro constraints are they under? And what kinds of questions do they have to think about, and grapple with, before they unleash these unwieldy, unstable repositories out into the world? We talk to actual dataset creators - it turns out, they're just like us - next time. We hope you join us.