Engelberg Center Live!

Book Event: Move Slow and Upgrade

Episode Summary

This episode is audio from the book launch for Move Slow and Upgrade, featuring co-author Albert Fox Cahn in conversation with Washington Post technology reporter Shira Ovide. It was recorded on March 4, 2026.

Episode Transcription

Announcer  0:00  

Welcome to Engelberg Center Live!, a collection of audio from events held by the Engelberg Center on Innovation Law and Policy at NYU Law. This episode is audio from the book launch for Move Slow and Upgrade, featuring co-author Albert Fox Cahn in conversation with Washington Post technology reporter Shira Ovide. It was recorded on March 4, 2026.

 

Katherine Strandburg  0:25  

Welcome everyone. It's great to have you all here, and I'm sorry we're starting a little bit late, but we didn't anticipate the complicated puzzle we would put you through to find this room. But you're all here now, yay. So I'm Kathy Strandburg. I'm the faculty director of the Information Law Institute, which is one of the institutions sponsoring this event. I'm also a faculty director of the Engelberg Center, which is another of the institutions sponsoring this event. I have no institutional connection to S.T.O.P. whatsoever, and they are also co-sponsoring this event, except that my dear friend Albert started S.T.O.P. So that's my connection. Okay, so I'm going to spend like two minutes introducing these people, because you don't want to hear from me, you want to hear from them. So, Albert Fox Cahn: practitioner in residence, currently, at the Centre of Governance and Human Rights and visiting scholar at Pembroke College. Pembroke College, where the heck is that? In Cambridge. He's also the founder of the Surveillance Technology Oversight Project, and, I'm so glad he has this in his bio, a fellow at the NYU Law School's Information Law Institute. He has many, many other, you know, honors, affiliations, et cetera, but maybe we'll just stop with that, because you all want to hear about the book. And then with him tonight, to have a conversation about the book, is Shira Ovide, a technology reporter covering how artificial intelligence affects the economy, jobs, and American institutions. Before the Washington Post, she wrote the On Tech newsletter for the New York Times. She was previously a columnist at Bloomberg Opinion,

 

Katherine Strandburg  2:12  

et cetera, et cetera, et cetera.

 

Katherine Strandburg  2:16  

I promise you that if you read the Washington Post and you read about AI, you have read her work. Obviously, that's true.

 

Katherine Strandburg  2:23  

Okay, so that's all I have to say. We're

 

Katherine Strandburg  2:25  

turning it over to them for the good stuff.

 

Shira Ovide  2:27  

Thank you. Thank you. Thank you for that. And I think probably at least half of this room already knows this, but you were saying that you grew up basically in this neighborhood and were protesting, what, in Washington Square Park up the road?

 

Albert Fox Cahn  2:43  

Yeah, no, I was into protesting the NYPD before it was cool. I grew up in the Meatpacking District, and I remember coming to this building to do a Know Your Rights workshop for legal observers, as a, you know, high school student who was getting ready to protest the Iraq War. And, yeah, my first public interest legal work was working with Claudia Angelos, who is here, I think reviewing deposition transcripts for her law school clinic, which, I don't know how you let me near those as, like, a 16-year-old, but, yeah, I have been very, very lucky to be in the NYU orbit for many, many years now.

 

Shira Ovide  3:30  

Okay, so I want to start by talking about the part of your book that goes back to the early stages of the covid pandemic, because I think that nicely illustrates what you call upgrades, or this sort of incremental innovation, and its opposite. So tell me about exposure notification apps, which I confess I basically memory-holed from the early wave of covid. So remind us of those apps and what your proposed alternatives were.

 

Albert Fox Cahn  4:03  

Yeah, there's nothing like bringing a group of people together and starting off with trauma-inducing memories, sorry, of covid. But no, look, during covid, there were all of these seemingly impossible choices: do you open up society? Do you keep people isolated? Do you focus on, you know, protecting access to education and having people doing work, or do you promote this sort of public health response? And there were a lot of different, you know, really somber options that people could use. The problem was, at the same time, you had a lot of surveillance solutionism coming into the debate and offering this magical thinking. And one of the worst examples of this was exposure notification apps. So during covid, people were being told, rightfully, you wanted to maintain six feet of distance in order to prevent yourself from being infected. And so what big tech tried to do in response to this is say, oh, it's okay, we can create an app for this. We can create an app to actually figure out who has been exposed to covid, which at the time was a really valuable thing, if you could do it, because there was such a shortage of covid-19 tests. I mean, I remember ordering a covid-19 test around the same time that Andrew Cuomo released his book on covid leadership, and I got the book a week before I got the test. It was just that hard to get access to this fundamental question of: have I been infected? And so getting that exposure information would be great. The problem is, our cell phones aren't actually epidemiologically sound ways of figuring out who's been exposed to a virus.

 

Shira Ovide  5:53  

And it was really framed, you know, a lot of it was framed as this heralded collaboration between Apple and Google to bake into nearly every smartphone made in the United States, right, bake in this kind of privacy-preserving exposure notification technology that would basically tell you: okay, three days ago you were within six feet of this person who later tested positive for covid, you should get tested, kind of thing, right? It was really heralded as this technological breakthrough and unique collaboration.

 

Albert Fox Cahn  6:31  

But it doesn't work. The problem is that, yes, Apple and Google were able to come up with this way to reduce the privacy harm of collecting all of this location data, but they never actually found a way to convert that into an accurate prediction of who had been exposed. Part of this is that, well, there's a huge difference between being six feet away from someone out in the open and being six feet away from people with a brick wall in between. There's a big difference between being six feet away when you're wearing a mask and other PPE and six feet away from someone who is, like, actively breathing into the same hookah pipe as you. There are so many variables, contextual data points that our phones can't capture, and these are the questions that disease detectives have been trained to ask people for years in identifying potential exposure points in an outbreak. But that's stuff that, you know, our phones couldn't do. And so this innovation that we had been sold as a way to scale up our capacity to figure out who was at risk, a multimillion-dollar innovation at that, one which New York State spent more than $20 million on, it turned out at the end of the day, you know, it was just fundamentally incapable of telling people who was at risk and who actually needed medical treatment. And to me, it was really frustrating, because it was just another example of how, when we have an unproven innovation being sold to us as the way to escape hard choices, we lose the opportunity to have an actual conversation about, well, what do we want the priority to be? How do we want to structure our public health response? How do you have grown-ups in the room who are focusing on reality, and not just a tech company coming in and saying, don't worry, you can have your pandemic cake and eat it too, and not worry about who was eating the cake first.

 

Shira Ovide  8:37  

So what would have been the ideal alternative, right, in your upgrader thesis? If the classic trap is, let's look for a tech silver bullet that will magically fix this hard problem, what's the upgrader mindset alternative?

 

Albert Fox Cahn  9:01  

So, upgraders. My co-author, Evan Selinger, and I wanted to look at the people who don't fall into these sorts of innovation traps and focus instead on evidence-based upgrades. And one of the things we saw is, it's going for the option where you have a proven return on your investment, where you are working with what we already know works, where you understand what the costs and the trade-offs are. And a good example of that came from Massachusetts. They didn't go with IBM to create a multimillion-dollar app collaboration. They just invested in having human beings reach out to people who had been infected with covid to actually map which of their neighbors, which of their friends, which of their family members potentially had been put at risk. And that sort of analog method, that sort of, you know, scaling of investment, that upgrade, went so much further in actually being able to improve public health outcomes. And I should note that, you know, obviously I have a particular focus in my work on creepy surveillance and all the unintended privacy harms of these sorts of solutions, and with all the exposure notification apps, we were promised not only that it would be effective, but that it would be private, it would be secure. And you know what we found? Despite all these promises, our data would actually leak in the end, and they weren't capable of even doing that one thing they promised us up front.

 

Shira Ovide  10:40  

What do you think it is that seems so appealing about, you know, the promise of "this tech will fix it," right? Even though, as you point out in multiple examples in your book, this is a repeated pattern that you can sometimes see coming from a mile away, this looking for quick fixes with technology. What is it that's so appealing about that approach?

 

Albert Fox Cahn  11:05  

I mean, it's the mindset where someone spends $1 on a lottery ticket instead of putting it in the bank. Putting $1 in the bank is boring. You know what the interest is going to be, you know how little you're going to get out of there. But that lottery ticket, well, you know, maybe. And that sort of lottery ticket mentality is what we see in a lot of government procurement, where we see, you know, agencies investing in vendors not because they know that the technology actually works, but because they hope, just maybe, this will be the silver bullet that gets them out of a seemingly intolerable situation.

 

Shira Ovide  11:43  

I mean, I want to push you, because part of the difficulty is that the upgrader alternative is also expensive and difficult and flawed, right? Hiring a lot of contact tracers to identify people who might have been exposed to covid, or spending billions of dollars to upgrade testing or mass availability of masks, that's hard and expensive as well, right? So talk to me about that.

 

Albert Fox Cahn  12:12  

Well, you know what you get with the upgrade. It's written on the box. You get a sense of, okay, if we invest $10 million in this response, what will be the benefit? What will be the harms? Why is this going to be useful? But when we keep chasing these, you know, sort of moonshot innovations that end up crashing on the launch pad, we end up spending more and more money on things that don't take us anywhere. And I think of so many places where we've been so focused on the innovations that we kind of got stuck in an intolerable status quo. And I think about things like the state of air traffic control in the US, where we have flights every day being directed by systems running an antiquated operating system that was considered obsolete before I was even born. And you have all of these technologies that really are failing to keep up with the pace of what we need when we're traveling, and yet we haven't actually addressed it, because people keep thinking, well, what is the wholesale change to the system that's going to get us to a better place where we don't need to worry about this tech at all? And those sorts of replacements for the air traffic control system, none of them have actually proven safe to fly.

 

Shira Ovide  13:35  

So we basically, I mean, I don't think this ended up happening, but at some point it was proposed to let Elon Musk's Starlink system basically handle some of the technology of the air traffic control system, right? Yeah.

 

Albert Fox Cahn  13:47  

No. If we had, like, a boogeyman who we could bring to life as part of this book, it would be Elon Musk. The book cover should basically be, like, Elon Musk's face with a giant sign saying no. Because, like, we wrote this book before DOGE, before DOGE even existed, before anyone had come up with that convoluted, nonsensical acronym. And then what did we see as this was going to press? Someone coming in and saying, oh, don't worry, if you just give me all of the country's data and just give me free rein to do whatever I want with it, well, guess what, I will solve everything. And what have we seen? Billions of dollars in costs, and nothing actually delivered, and everyone's information seemingly put at risk.

 

Shira Ovide  14:35  

And you use the term in the book multiple times, that sort of don't let the perfect be the enemy of the good, right? So I guess in your air traffic control example, you know, fix the system that exists today, right? And don't hold out hope for some sort of silver bullet from Elon Musk or some other person with magic

 

Albert Fox Cahn  14:55  

beans, exactly. And I think, like, another example, a technology that almost everyone will have encountered, is home security, right? We all have some home security system in our lives. We've been told that the way that needs to be deployed is having a Ring camera, which is connected to a dubious online platform which is sharing that information with ICE and with the police department. But the evidence-based upgrader approach that we see as being the alternative? It's having better locks. It's having better bars on the windows if you need them. It's having automatic light systems if you're worried about people knowing you're gone on vacation. And, you know, what we've seen is there's a huge race to invest in things like Ring and Flock and all of these incredibly Orwellian, you know, surveillance companies, and they're telling us, well, this is the thing that will keep you safe. But when we look at the data, we see a lot of evidence that they have been a great investment for the people who got in early on the company, and little evidence that they've been a good investment for the people who actually installed them in their home.

 

Shira Ovide  16:08  

Yeah, those technologies are appealing, right? Because something like Ring is pitched as a convenience for the homeowner, right? That you can make sure that your package arrived on time and didn't get stolen by the neighbor or the kid down the street. You can now, with facial recognition, you can tell, I knew this was gonna drive you crazy.

 

Albert Fox Cahn  16:31  

Everyone can see, like my veins slowly twitching at the utterance, right?

 

Shira Ovide  16:35  

So that you can see, oh, that's my friend Bill at the door, or something like that. So tell me why that makes your blood vessels explode?

 

Albert Fox Cahn  16:43  

Well, we've seen CCTV in various forms being sold as a way to protect the public for decades now. London was the first municipality to invest in wide-scale CCTV deployment. This was something called the ring of steel. It was created initially as a way to stop IRA bombs back in the '80s and '90s, and then the mission shifted over time. It was then a way to prevent everything from other terrorist movements to knife crimes to preventing all sorts of other attacks. But when we look at the evidence about what happens when you deploy CCTV? It can be useful at times as a way to investigate harm after the fact, but it's a terrible way to actually deter violence. And we see this every day with high-profile killings and crimes that happen in places like New York, where, you know, people can be killed in the middle of Midtown, and then the NYPD will say it's a win, because they had an image of the attack. But we were never told that this is a way to memorialize violence. We were told this was a way to prevent it. And so the upgraders are instead looking at, well, what are the tools that actually give us the safety we crave? But you're right, they don't have that halo effect of magical thinking.

 

Shira Ovide  18:10  

They're boring. They are so boring. I mean, better locks is boring. Better locks

 

Albert Fox Cahn  18:15  

is incredibly boring and incredibly powerful. Like, I probably shouldn't admit this, but the statute of limitations, luckily, has run. I started learning how to do lock picking when I was in high school. Yeah, I was a nerd. I was into hacking, and, like, I was like, well, everyone says you need to learn how to lock pick if you want to learn how to hack a computer. And it is shockingly easy to actually get into most locks. Not that I ever did that. And so when you see what $100 can do when you buy a better lock versus, you know, buying one of these camera systems, it actually can, you know, stop someone who's trying to do harm. But at the end of the day, you know, it's not that force field that we are advertised on TV.

 

Shira Ovide  19:11  

Well, let me go back again to the covid chapters in your book, because I think the pushback, and you acknowledge this, right, is that the mRNA vaccines for covid were an example of a moonshot technology bet that paid off in a huge way. So doesn't that disprove your thesis that, you know, incremental innovation is the path to success?

 

Albert Fox Cahn  19:34  

You're right, and I will retract the book. No, think about the mRNA vaccines. They are probably one of the most important life-saving technologies developed in my lifetime. But they weren't developed all at once. They weren't developed all during the covid-19 pandemic. You saw years of iterative improvements and upgrades to these vaccine platforms, a way to develop the vaccine capability. And yes, you never saw an mRNA vaccine come to the market before covid-19, but there had been efforts to develop it for decades. And the real innovation, the real change during the pandemic, was the fact that we were willing to invest in rapid deployment of vaccines that hadn't yet been approved. And that is a classic upgrade. It was incredibly expensive. We knew what the upside potential was. There was a clear downside risk that none of the vaccines we were investing in would be approved, but we decided collectively that it was worth taking that chance to accelerate the deployment of that medication, and I'm really glad we did. But I think that because our public narrative around how science changes, about how technology changes, about how we fix things, is so focused on these eureka moments that so often don't actually exist, it really skews our understanding of what we need to do to make things better.

 

Shira Ovide  21:09  

I mean, it's interesting. It strikes me that part of the issue is that we have heroes of innovation, right? I mean, there is Elon Musk and Mark Zuckerberg and Sergey Brin, and probably some women in there too. And we don't have that for upgraders. We don't have heroes. We don't have a Hall of Fame for upgraders, right? So who would you nominate? Who should be in the upgrader Hall of Fame? Yeah.

 

Albert Fox Cahn  21:34  

And I can't believe you just said "hero" with most of those people with a straight face. Impressive. Bad choice. I mean, I think about the cybersecurity staff who are in back offices of companies all around the country, literally upgrading systems day and night, looking for where there's a known exploit, making sure that they've done all the things to respond to known vulnerabilities. And it is a narrative that no company is going to put out in their press release, no one's going to do a TV show about it. But that's been the reason we have a functioning internet. That's the reason why we trust our smartphones enough to bank on them, to use them to, you know, travel abroad, to use them for things every day, because we've created platforms that are reliable enough, as frustrating as they can be, that we actually trust them to get it right more often than not.

 

Shira Ovide  22:41  

Okay, again, boring. Yes. Okay, so I'm gonna shift to a lightning round, or a lightning-ish round, where I'm gonna describe a technology, and then you tell me why it's sort of techno-solutionism, or this kind of big bang approach to technology, and what

 

Albert Fox Cahn  22:56  

your upgrader alternative is. Rapid curmudgeonliness, on the fly. So, the store

 

Shira Ovide  23:01  

shelves at places like Target and Duane Reade that have everything locked up behind plexiglass, and that everybody hates, right? And it's presented as, this is how we deter theft. Well, what is your upgrader alternative to that?

 

Albert Fox Cahn  23:17  

Having more staff, having, how boring! But, like, also, this is a classic example of responding to a problem that didn't exist. Because the reason why a lot of these companies installed all of these plexiglass protectors was in response to data, later discredited, about this surge in shoplifting that never actually happened. And so you see people responding to threats that don't exist with solutions that don't help, and ignoring the actual things that we can do

 

Shira Ovide  23:48  

instead, right? So they took people out of stores to save money, right? And then that meant fewer eyes on the store. And then they were like, well, we have a perceived theft problem, let's lock up everything behind

 

Albert Fox Cahn  24:00  

and then locked doors. Why are we saying in our quarterly reports now that we're seeing a massive reduction in people's willingness to buy things in our

 

Shira Ovide  24:08  

stores. I have definitely walked away from those stores without buying anything, because things have been locked up. Okay, next: the Tesla door handles. And just for folks who don't know about the Tesla door handles, there's been, you know, great reporting from my colleague from Bloomberg News about people essentially getting trapped in emergency situations, right? Because those door handles are flush with the door panel and they're electrically powered, and so if the car loses power, if the door handle loses power, the person inside and outside the car can't get in or out. So, again, Elon Musk, we've now said his name way too many times. But what is your upgrader alternative to the Tesla door handles?

 

Albert Fox Cahn  24:51  

Sometimes you don't need an upgrade, like

 

Shira Ovide  24:55  

you're saying door handles. We

 

Albert Fox Cahn  24:57  

have had cutting-edge door handle technology for decades. But it's not just the door handles. Like, think about it. You're driving down the road and suddenly your windshield is fogging up, and it's, you know, it's night, it's a windy road, it's poor visibility. Would you rather go through three menu screens on a touch pad before you get to the button that maybe turns on your defroster, or do you just want to turn a knob? Like, I think this idea that everything needs to be high-tech to be better misses what we actually crave to make our user experience, in most cases, not annoying, and in some cases, not life-threatening.

 

Shira Ovide  25:46  

Do people look at you like you're a lunatic when you say, instead of buying this facial recognition-powered camera, get a better door lock, or, you know, a package depot where you can lock things up? Or, how about regular door handles and regular buttons and knobs instead of touch screens? Like, what's the response when you say this to people who are used to, well, this is the cool tech thing, this seems useful, this doorbell camera?

 

Albert Fox Cahn  26:16  

Yeah, no. I think it depends on the audience. I think if I'm talking to most people, they're like, yeah, that really is frustrating, that's awful. If I'm talking to people in Silicon Valley, they're like, but what will we invest in? And if it was Eric Adams, when he was mayor, he was like, no, no, no, this is too cool. Like, we saw in Mayor Adams this exemplar of what it's like when someone is so obsessed with innovation that they lose any sight of what problem they're trying to solve and how they're trying to make things better.

 

Shira Ovide  26:50  

Okay, I'm gonna go back to my lightning round, because it is 2026, and therefore I am obligated to mention AI chatbots. So where do you think AI fits in your spectrum of the sort of incremental innovations, upgrader mindset, and then the opposite?

 

Albert Fox Cahn  27:08  

Look, I definitely feel like a curmudgeonly grandpa saying "get off my lawn" when I talk about chatbots. But I do think that there's been this huge emphasis on: AI is going to change everything, AI needs to be a part of everything, AI is going to be the future of everything. And I see a lot of mediocre products that are integrating chatbots and just frustrating people, who are like, why does this need to be here? I think there are incredible advances in machine learning. But again, we've had machine learning for decades. We've had linear algebra for decades longer. We've had the fundamental math at the core of artificial intelligence since long before I was around. And yet we've seen this rush to focus on this one form of AI as a solution to everything, ignoring the much more complicated and, I think, much more interesting upgrades that allow us to do protein folding more effectively, that allow us to increase the sort of personalized medicine that we see doctors using, that are really building on existing big data practices, but with a bit more power.

 

Shira Ovide  28:27  

Yeah, I mean, a couple of the examples you talk about in the book, right, where you think, oh, actually, large language models can be helpful here, including for physicians, right? The useful thing about chatbots is they can sort of simulate perfect empathy, textbook empathy, in a way that a physician may not be able to, right? So talk a little bit about that.

 

Albert Fox Cahn  28:46  

I think this may show my sense of the empathy of some of the doctors I've dealt with. But, yeah, I think that there are times when doctors are moving so quickly and have so many patients that they don't have the bandwidth to actually give a thoughtful response when they have a question from a patient or a patient's family. And so that is an example where LLMs can be useful in taking this very curt, you know, rapidly texted answer and turning it into a much more empathetic, thoughtful, rounded answer, and also, you know, being a check on when people are not being empathetic in their communications. I personally think that, you know, it's rare when you have a situation where AI can actually improve the quality of writing. You know, the overstressed doctor is one, but I think it's much more common where, you know, it can do this analysis and say, like, hey, are you sure you don't want to take more time on that? Okay, I'm gonna

 

Shira Ovide  29:46  

ask one more question, and then I'm gonna open this up for folks who want to ask their own questions. What do you think is the responsibility of policymakers, right, to not go for that big bang technology solution, and to instead opt for a different alternative?

 

Albert Fox Cahn  30:02  

Yeah, I think one thing I want to make clear: we see the same technologies being deployed all around the world, being sold in nearly every country, raising the same sorts of concerns, and yet we see governments responding very differently. And this book really is highlighting a systemic failure of American policymakers, where we have a really poor track record of investing in these high-tech bridges to nowhere, chasing after the innovation that can't actually solve the problems we need to take on, and ignoring the much more pragmatic upgrader alternative.

 

Shira Ovide  30:43  

Do you think, you know, I had a conversation with a public health researcher maybe three or four years ago about e-cigarettes, about vaping, and one thing they said that stuck with me was, you know, nobody knew, including in the United States, what the health implications of e-cigarettes were, right? Where they could do good for smoking cessation, and where they might do harm. And the way that other countries, other governments, approached this was they said, we're not going to let this in our country until we see a kind of provable benefit. And we didn't do that in the United States. We started to regulate them, or try to regulate them, after vaping had already become a public phenomenon among young people. So should we do that in the United States? I mean, it feels anathema, but should we say, no, Uber can't launch here? No, we're not going to use AI chatbots in schools? We're not going to allow prediction markets, right, to sort of bet on who's going to die, and the Iran war, that kind of thing? Should we do that?

 

Albert Fox Cahn  31:41  

Yeah, I mean, also, it's, like, freaky that those are even things we're having to wrestle with. Like, I feel like we're living in more of a Black Mirror episode some days. Look, I'm not sure if the answer is, on the one hand, to take more of a European regulatory approach, where you're approving these things ex ante, or, you know, a more traditional American approach of just suing people after the fact. I love suing people, so I tend to gravitate in that direction. But either way, I think that there's this sort of collective failure to appreciate the risks of a lot of the things we're doing, and part of that is because right now we're both failing to regulate a lot of these technologies and, in the case of AI, making it really hard for people to sue when they get hurt. And so we're in this space where a lot of people feel powerless and feel like there really aren't any consequences when these technologies upend our lives, and there's seemingly no accountability.

 

Shira Ovide  32:46  

and they're right, right? There is no accountability, and they should feel helpless.

 

Albert Fox Cahn  32:49  

Yeah, this is where I'll put in a not-so-subtle plug for the work that S.T.O.P. and other civil rights groups are doing. But you're right, there hasn't been accountability in recent decades, and I think that's one of the tasks ahead of us. But I also think that part of it, for us as voters, as consumers, as neighbors, as people who see this sort of ethos getting introduced into our lives, is to say, whether it's the marketing team that comes in and says, oh, we have to have a chatbot as part of every product we launch, or the city council meeting where they say, oh, we need to make sure that we educate every kid using AI, we need a bit more of that collective skepticism, to say, well, what are the upgrades we can focus on instead?

 

Shira Ovide  33:38  

Okay, if folks have questions, I guess, like, raise your hand, stand up, I don't know. Yeah, yes, please. Oh, a microphone user, well done.

 

Speaker 1  33:49  

How do we differentiate between upgrades and techno-solutions, whether it's an mRNA vaccine or a GLP-1 or something, that may or may not be better or worse?

 

Albert Fox Cahn  34:01  

Amazing question. So, as we detail in the book (another shameless plug), there are a few classic warning signs. One is the proportionality of the benefit. Innovations are often sold to us with this idea that you're getting this moonshot return, but that it doesn't really cost anything; you know, this sense that it's too good to be true, and therefore it probably isn't. Another thing is just looking at whether this is an extension of a proven practice or a complete departure from it. I think that some of the most problematic upgrades have been when we're investing in technologies that are taking a completely different approach than anything we've done before, and yet they're telling us it will work much more effectively. But we all know that the more you're departing from the status quo, the more questions it raises about how well things will go. I think, really, part of it, at its core, is that we need to lower our expectations for what a realistic solution looks like, which I think is a hard thing to sell in the face of American optimism, because so much of our political discourse focuses on: what is this amazing thing that can make everything better? But that sort of moonshot approach constantly keeps getting us off track.

 

Shira Ovide  35:32  

So do you think that this tendency in the United States to go for these kinds of moonshots, is that born fundamentally of optimism?

 

Albert Fox Cahn  35:41  

I think it's born out of optimism. I think it's also just born out of our sort of rejection of technocratic decision making. There is this really sober, somber, technocratic approach that a lot of European countries will take in evaluating a lot of these public policy questions, but so much of what we focus on in the US is what can be boiled down to the sales pitch and the slogan. And part of that is just radical decentralization, right? We have more than 16,000 police departments all trying to figure out what new police tech they need to invest in; we have far more school boards than that, constantly trying to figure out, well, what technology do we have to introduce in the classroom? And they're basically all left on their own to figure out how to answer these questions. Whereas in a lot of countries, these processes are a lot more centralized, and so you have a lot more there to sort of keep the charlatans at the door. Okay, what else?

 

Speaker 2  36:48  

Yes, microphone. I recently tried and failed to talk a friend out of taking a job at Palantir. The argument that he made was that if Americans don't work at Palantir and advance that mass surveillance technology and data gathering, America won't be able to compete against China, where this is happening anyway, including on American data. And I'd just be curious, like, how would you have talked my friend out of it?

 

Albert Fox Cahn  37:25  

Friends don't let friends work for Palantir. I mean, this argument is so old: yes, the only way we can fight techno-authoritarianism abroad is by creating it here at home. No. There isn't an oppositional force here, right? This isn't nuclear deterrence, where somehow the creation of a domestic arsenal will prevent the deployment of a foreign government's destructive capability. Instead, this is something that is being deployed here, and being deployed here with a destructive impact that no foreign government could have. You know, there is no algorithm that could be developed by Beijing that would have the same horrific impact as the Mobile Fortify app when ICE officers are using it to pull people out of cars and homes on the streets of Minneapolis, or when you see people being disappeared when they're coming back through the airport because of how an algorithm developed by Palantir is deciding their social media poses a threat. So the threat, to me, is the coupling of these technologies with the force of law, with the force of these paramilitary organizations, and that simply isn't something a foreign government can do. And I think that, quite frankly, this is a narrative I've seen across administrations, where people often use the nationalization of this technology as a way to insulate it from the very real impact it's having here. I just don't think it logically holds together. But they pay very well.

 

Shira Ovide  39:12  

So, and this is a common, I don't know what you'd call it, rationalization, right? There are a lot of people in the so-called AI safety movement, right? These are people who say AI must be developed so that it doesn't suck up power and work against humans' interests. And I think their point is: if we don't build this AI machine, this AI superpower, then somebody with ill intent will do it, right? So only we can be trusted to build the singularity, right, to build Skynet.

 

Albert Fox Cahn  39:44  

But imagine if you said this about any other technology. Imagine if I said to you, hey, I need to build a coal power plant in your neighborhood, because if we don't do it here, China might build a coal power plant. This idea that somehow one is opposing the other. And also, at the same time, we see that there are, what, maybe three or four cutting-edge AI labs in the US. They're doing these frontier models that are incredibly, incredibly costly and environmentally destructive and not particularly good at actually solving the problems most people have. But then you have, what, twice as many frontier labs in China doing this exact same work, basically just a few months behind the leading edge here. And here's the kicker: they're almost all doing it open source. So not only do you have this sort of massive AI capability being developed in China, it's basically free for anyone to copy. And so I don't understand; again, the answer here seems quite simple. If the fear is having too much of this destructive AI technology in the world, the response isn't to have more of it.

 

Shira Ovide  41:05  

Yes, in the back there

 

Speaker 3  41:09  

Is another way to frame what you're saying, at a very high level, that when we're assessing these technological moonshots versus the other options, people are just doing really lousy cost-benefit analyses, and all you're saying is: no, you have to account for the likelihood of success, the risks, privacy harms, surveillance harms, and just do really, really good, thorough cost-benefit analyses? And sometimes the technology might win, right? Like, this is not really surveillance, but if we compare the Federal Reporter to westlaw.com, I think westlaw.com would win. But more times than not, you know, there will be surveillance harms. Is that kind of what you're saying?

 

Albert Fox Cahn  41:55  

When you reformulate it like that, I don't know, I think that really gets to the heart of it. Look at the median outcome; don't look at the best case. In almost every other area of life, we tend to do this instinctively: we look at what the median case is, what the most likely outcome is, and compare that sort of situation. But with these technologies, we let our imaginations run wild and focus on just the best possible outcome, and it's to our constant detriment. Because not only does it mean we invest in things that don't actually come through, but we also ignore all the harms they cause in the process.

 

Shira Ovide  42:44  

We have time for one more. Sure. Okay.

 

Speaker 4  42:47  

So, whenever I bring these things up to people, I'm always accused of being an enemy of progress: you're an enemy of progress. What do you say to someone who says that? How do you shut that down? I've never figured that out.

 

Albert Fox Cahn  43:01  

I mean, I feel like I should have a T-shirt that says "enemy of progress." But no, I think there is this false binary between the status quo, the just unacceptable idea that we have to simply take all of these broken systems on the terms they operate on today, and investing in this unrelenting form of techno-solutionist capitalism. I think there is this middle ground of saying: no, I'm not opposed to progress, I just want progress that will actually work. And people move on so quickly from these tech disasters. Like, one thing we didn't even touch on: the metaverse. Remember how recently people were investing tens of billions of dollars in the idea that none of us would be here, we wouldn't need classrooms, we wouldn't do any of this in person, because we would be in the metaverse? That was, like, less than, what, less than 10 years ago. And it's all been written off.

 

Shira Ovide  44:09  

Five years ago. Oh, my God, five years ago. Yeah.

 

Albert Fox Cahn  44:11  

And the idea that we could see the largest companies in the world, some of the largest governments in the world, all the people we turn to to help shape the future, investing in this thing when, at the time, critics were pointing out why it would never work. And I think that's the other thing that I really think is important: in the book, we try to avoid just backseat-driving this decision making. Instead, we're looking at what critics were saying at the time. In each one of these cases, people were saying (and yeah, I picked a few examples where I was the critic saying this), this is not going to work, this is why the technology is doomed to fail. But our incentive structure, our public discourse, our procurement practices, they're set up to silence the detractors and elevate the optimists, and that's just something that, you know, I'm hoping bit by bit we can collectively change.

 

Shira Ovide  45:23  

I'm just gonna take one more from your mentor there. Wouldn't want to shut you out.

 

Speaker 5  45:29  

I'm afraid my question either calls for "I haven't thought about that" or too long an answer, but I'll just pose it. I was put in mind, as you were talking (and I'm really looking forward to reading the book), of a course I took in college, in the history of science, in order to avoid an actual science requirement. We read Thomas Kuhn and The Structure of Scientific Revolutions, and that's where I learned about paradigm shifts. And I'm wondering whether you've given any thought to any historical lessons that we might draw about this moment.

 

Albert Fox Cahn  46:13  

I mean, I think that a big issue I see at the heart of this is not the nature of technology; it's the nature of how we invest in technology. I think it's the venture capital life cycle at the heart of so many of the technologies we're talking about, and the idea that a success case looks like a 20x multiplier in two years on something that is just an idea today but is supposedly going to be a revolutionary product tomorrow. If that is your idea of success, of economic success, then it dramatically narrows the window of what technological success can look like, and skews us away from the model of technological development that we've had for so long. Like, yes, we've always had this idea of the great innovators, and that's always had this outsized place in the public discourse. But I think we've also, throughout, had this far better appreciation for the tinkerers in the labs, working day in, day out, to sort of inch forward the frontier of knowledge and better refine our understanding of the world. That's not something a venture capitalist can invest in. I think that's the sort of upgrade mindset that we all need.

 

Shira Ovide  47:38  

Okay. Albert, thank you, and thank you to Evan, in absentia, for also collaborating on this book. Thank you very much.

 

Katherine Strandburg  47:49  

Thank you all for coming. And I wanted to thank two people who are not here right now, but are downstairs waiting for all of us to come down and have nice drinks and food and things like that, straight downstairs. Make sure, if you see them, Nicole arst and anise, they were totally responsible for actually putting this whole thing together. So they're why you're here, and we're very happy you're here, and we'll see you downstairs.

 

Announcer  48:22  

The Engelberg Center Live! podcast is a production of the Engelberg Center on Innovation Law and Policy at NYU Law, and is released under a Creative Commons Attribution 4.0 International license. Our theme music is by Jessica Batke and is licensed under a Creative Commons Attribution 4.0 International license.