How to Start an AI Panic

Last week the Center for Humane Technology summoned over 100 leaders in finance, philanthropy, industry, government, and media to the Kissinger Room at the Paley Center for Media in New York City to hear how artificial intelligence might wipe out humanity. The two speakers, Tristan Harris and Aza Raskin, began their doom-time presentation with a slide that read: “What nukes are to the physical world … AI is to everything else.”

We were told that this gathering was historic, one we would remember in the coming years as, presumably, the four horsemen of the apocalypse, in the guise of Bing chatbots, would descend to replace our intelligence with their own. It evoked the scene in old science fiction movies—or the more recent farce Don’t Look Up—where scientists discover a menace and attempt to shake a slumbering population by its shoulders to explain that this deadly threat is headed right for us, and we will die if you don’t do something NOW.

At least that’s what Harris and Raskin seem to have concluded after, in their account, some people working inside companies developing AI approached the Center with concerns that the products they were creating were phenomenally dangerous, saying an outside force was required to prevent catastrophe. The Center’s cofounders repeatedly cited a statistic from a survey that found that half of AI researchers believe there is at least a 10 percent chance that AI will make humans extinct.

In this moment of AI hype and uncertainty, Harris and Raskin are breaking the glass and pulling the alarm. It’s not the first time they’ve triggered sirens. Tech designers turned media-savvy communicators, they cofounded the Center to inform the world that social media was a threat to society. The ultimate expression of their concerns came in their involvement in a popular Netflix documentary-cum-horror film called The Social Dilemma. While the film is nuance-free and somewhat hysterical, I agree with many of its complaints about social media’s attention-capture, incentives to divide us, and weaponization of private data. These were presented through interviews, statistics, and charts. But the doc torpedoed its own credibility by cross-cutting to a hyped-up fictional narrative straight out of Reefer Madness, showing how a (made-up) wholesome heartland family is brought to ruin—one kid radicalized and jailed, another depressed—by Facebook posts.

This one-sidedness also characterizes the Center’s new campaign called, guess what, the AI Dilemma. (The Center is coy about whether another Netflix doc is in the works.) Like the previous dilemma, a lot of points Harris and Raskin make are valid—such as our current inability to fully understand how bots like ChatGPT produce their output. They also gave a nice summary of how AI has so quickly become powerful enough to do homework, power Bing search, and express love for New York Times columnist Kevin Roose, among other things.

I don’t want to dismiss entirely the worst-case scenario Harris and Raskin invoke. That alarming statistic about AI experts believing their technology has a shot at killing us all actually checks out, kind of. In August 2022, an organization called AI Impacts reached out to 4,271 people who authored or coauthored papers presented at two AI conferences, and asked them to fill out a survey. Only 738 responded, and some of the results are a bit contradictory, but, sure enough, 48 percent of respondents saw at least a 10 percent chance of an extremely bad outcome, namely human extinction. AI Impacts, I should mention, is supported in part by the Centre for Effective Altruism and other organizations that have shown an interest in far-off AI scenarios. In any case, the survey didn’t ask the authors why, if they thought catastrophe possible, they were writing papers to advance this supposedly destructive science.

But I suspect this extinction talk is just to raise our blood pressure and motivate us to add strong guardrails to constrain a powerful technology before it gets abused. As I heard Raskin and Harris, the apocalypse they refer to is not some kind of sci-fi takeover like Skynet, or whatever those researchers thought had a 10 percent chance of happening. They’re not predicting sentient evil robots. Instead, they warn of a world where the use of AI in a zillion different ways will cause chaos by allowing automated misinformation, throwing people out of work, and giving vast power to virtually anyone who wants to abuse it. The sin of the companies developing AI pell-mell is that they’re recklessly disseminating this mighty force. 

For instance, consider one of the slides, among many, that Harris and Raskin shared about AI’s potential harm. It was drawn from a startling study in which researchers applied advanced machine-learning techniques to data from brain scans. With the help of AI, researchers could actually determine from the brain scans alone the objects that the subjects were looking at. The message was seemingly clear: In the dystopian AI world to come, authorities will be looking inside our heads! It’s something that Bob Dylan probably didn’t anticipate nearly 60 years ago when he wrote, “If my thought dreams could be seen / they’d probably put my head in a guillotine.” Sitting in the Kissinger Room, I wondered whether certain politicians were sharpening their decapitation blades right now.

But there’s another side to that coin—one where AI is humanity’s partner in improving life. This experiment also shows how AI might help us crack the elusive mystery of the brain’s operations, or communicate with people with severe paralysis.

Likewise, some of the same algorithms that power ChatGPT and Google’s bot, LaMDA, hold promise to help us identify and fight cancers and other medical issues. Though it’s not a prominent theme in the Center’s presentation, the cofounders understand this. In a conversation I had with Raskin this week, he acknowledged that he’s an enthusiastic user of advanced AI himself. He exploits machine learning to help understand the language of whales and other animals. “We're not saying there's not gonna be a lot of great things that come out of it,” he says. Let me use my biological large language model to strip away the double negative—he’s saying there will be a lot of great things coming out of it. 

What’s most frustrating about this big AI moment is that the most dangerous thing is also the most exciting thing. Setting reasonable guardrails sounds like a great idea, but doing that will be cosmically difficult, particularly when one side is going DEFCON and the other is going public, in the stock market sense.

So what’s their solution? The Center wants two immediate actions. First, an AI slowdown, in particular “a moratorium on AI deployment by the major for-profit actors to the public.” Sure, Microsoft, Meta, Google, and OpenAI can develop their bots, but keep them under wraps, OK? Nice thought, but at the moment every one of those companies is doing the exact opposite, terrified that their competitors might get an edge on them. Meanwhile, China is going to do whatever it damn pleases, no matter how scary the next documentary is. 

The recommended next step takes place after we’ve turned off the AI faucet. We use that time to develop safety practices, standards, and a way to understand what bots are doing (which we don’t have now), all while “upgrading our institutions adequately to meet a post-AI world.” Though I’m not sure how you do the last part, pretty much all the big companies doing AI assure us they’re already working through the safety and standards stuff.

Of course, if we want to be certain about those assurances, we need accountability—meaning law. No accident that this week, the Center repeated its presentation in Washington, DC. But it’s hard to imagine ideal AI legislation from the US Congress. This is a body that’s still debating climate change when half the country is either on fire, in a drought, flooded by rising sea levels, or boiling at temperatures so high that planes can’t take off. The one where a plurality of members are still trying to wish away the reality of a seditious mob invading their building and trying to kill them. This Congress is going to stop a giant nascent industry because of a bunch of slides?

AI’s powers are unique, but the struggle to contain a powerful technology is a familiar story. With every new advance, companies (and governments) have a choice of how to use it. It’s good business to disseminate innovations to the public, whose lives will be improved and even become more fun. But when the technologies are released with zero concern for their negative impact, those products are going to create misery. Holding researchers and companies accountable for such harms is a challenge that society has failed to meet. There are endless cases where human beings in charge of things make conscious choices that safeguarding human life is less important than, say, making a profit. It won’t be surprising if they build those twisted priorities into their AI. And then, after some disaster, claim that the bot did it!

I’m almost tempted to say that the right solution to this “dilemma” is beyond human capability. Maybe the only way we can prevent extinction is to follow guidance by a superintelligent AI agent. By the time we get to GPT-20, we may have our answer. If it’s still talking to us by then. 

Time Travel

Thirty years ago I wrote a book called Artificial Life, about human-made systems that mimicked—and possibly, qualified as—biological entities. Many of the researchers I spoke to acknowledged the possibility that these would evolve into sophisticated beings that might obliterate humanity, intentionally or not. I had a lot of discussions with A-life scientists on that subject and shared some transcripts with the Whole Earth Review, which published them in the fall of 1992. Here’s a bit from an interview with scientist Norman Packard of the Santa Fe Institute.

Steven Levy: I’ve heard it said that this is potentially the next evolutionary step, that we’re creating our successors.

Norman Packard: Yeah.

Which is a pretty heavy thing, right?

It’s sort of like a midlife crisis. It has to happen sometime.

Well, we have to die, that has to happen sometime. But you don’t have to create the next species. It’s never been done on this planet before.

Come on. Things have been replaced by other things for billions of years.

Yeah, but not by things they’ve created.

Not by things they’ve created, no.

If you believe that’s possible, aren’t you worried whether it’s a good idea to do it?

No, I believe very strongly in a fairly fatalistic way of the inevitability of the evolutionary process. 

The fact of evolution is inevitable, but where it goes is not.

My point is really that all-out atomic war and all that junk in the overall evolutionary record, with the timescale of billions of years, is a teeny tiny little blip. The biosphere would get jostled around a little bit, a few of the higher life-forms, like us, for instance, might get totally exterminated for a while, but what the hell, it would keep on going.

Ask Me One Thing

Jay asks, “Do you see a significant cultural backlash to AI products (like the move from digital music to vinyl)? Will products that elevate the human over the machine gain mind and market share?”

Good question, Jay. One thing under consideration for regulating AI is a truth-in-labeling rule that declares when a piece of content is produced by a machine. (Like we do at WIRED!) It sounds like a basic right to know this. (We should still keep in mind that just because a human generated something doesn’t mean it’s accurate, or free of bias, or original.) But over a long period of time, as AI becomes increasingly common—and a lot of things we read, listen to, or watch will be the result of collaboration between humans and bots—those labels might become meaningless.

I do think, however, that we may well cherish a label indicating that a flesh-and-blood person produced something. Think of it as one of those protected appellations certifying that a French wine was grown and harvested in a premium region. As AI gets better and better, the stuff made by humans might well be technically inferior to that of our robot Shakespeares and Picassos. But just as we value folk art, funky hand-stitched clothes, and homemade cooking, tagging media produced by Homo sapiens might have a value in itself. But like vinyl, it will probably be priced high and relegated to a niche market.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

End Times Chronicle

If you visit Greenland this winter, bring your bikini.

Last but Not Least

AI algorithms are already being used by governments to make life-changing decisions about people.

Dilemma or not, ChatGPT is coming after a lot of office jobs.

What do philosophers do when life loses meaning? This one dosed himself with psychedelics.

Margaret Atwood would like to be a fox. One, we hope, with a typewriter.
