
Inside the Senate’s Private AI Meeting With Tech’s Billionaire Elites

US senators are proving slow studies when it comes to the generative artificial intelligence tools that are poised to upend life as we know it. But they’ll be tested soon—and the rest of us through them—if their new private tutors are to be trusted.

In a historic first yesterday, upwards of 60 senators sat like schoolchildren—not allowed to speak or even raise their hands—in a private briefing where some 20 Silicon Valley CEOs, ethicists, academics, and consumer advocates prophesied about AI’s potential to upend, heal, or even erase life as we know it.

“It’s important for us to have a referee,” Elon Musk, the CEO of Tesla, SpaceX, and X (formerly Twitter), told a throng of paparazzi-like press corps waiting on the sidewalk outside the briefing. “[It] may go down in history as very important to the future of civilization.”

The weight of the moment is lost on no one, especially after Musk warned senators inside the room of the “civilizational risks” of generative AI.

As many senators grapple with AI basics, there’s still time to influence the Senate’s collective thinking before lawmakers try to do what they’ve failed to do in recent years: regulate the emerging disruptive tech.

Inside the briefing room there was consensus on the dais that the federal government’s regulatory might is needed. At one point, Senate Majority Leader Chuck Schumer, the New York Democrat who organized the briefing, asked his assembled guests, “Does the government need to play a role in regulating AI?”

“Every single person raised their hand, even though they had diverse views,” Schumer continued. “So that gives us a message here: We have to try to act, as difficult as the process may be.”

The raising of diverse hands felt revelatory to many.

“I think people all agreed that this is something that we need the government’s leadership on,” said Sam Altman, CEO of OpenAI, the maker of ChatGPT. “Some disagreement about how it should happen, but unanimity [that] this is important and urgent.”

The devilish details are haunting, though. Because generative AI is so all-encompassing, a debate over regulating it can quickly expand to include every divisive issue under the sun, something that was on display in the briefing right alongside the show of unity, according to attendees who spoke to WIRED.

To the surprise of many, the session was replete with specifics. Some attendees brought up their need for more highly skilled workers, while Microsoft cofounder Bill Gates focused on feeding the globe’s hungry. Some envision a sweeping new AI agency, while others argue that existing entities—like the National Institute of Standards and Technology (NIST), which was mentioned by name—are better suited to regulate in real time (well, AI time).

“It was a very good pairing. Better than I expected,” says Senator Cynthia Lummis, a Wyoming Republican who attended the briefing. “I kind of expected it to be a nothingburger, and I learned a lot. I thought it was extremely helpful, so I’m really glad I went. Really glad.”


Like many in the room, Lummis’ ears perked when a speaker called out Section 230 of the 1996 Communications Decency Act—a legislative shield that protects tech firms from liability for content users publish on their social media platforms.

“One of the speakers said, ‘Make users and creators of the technology accountable, not immune from liability,’” Lummis says, reading from her exhaustive hand-scribbled notes. “In other words, he specifically said, ‘Do not create a Section 230 for AI.’” Lummis adds that the speaker who proposed this—she didn’t identify him—“was sitting next to [Meta CEO Mark] Zuckerberg and he said it—one or two seats away, which I thought was fascinating.”

Beyond the diverse opinions of lawmakers, there were also disagreements among the experts invited to speak at the private briefing. The forum’s attendees and other tech leaders are talking about building and expanding on gains from AI, but many Latinos still lack broadband internet access, says attendee Janet Murguía, president of Hispanic civil rights organization UnidosUS. That reality underscores how “existing infrastructure gaps keep us from being at the front door of AI,” she says.

Murguía wants lawmakers to think about the needs of the Hispanic community to prioritize job training, fight job displacement, and guard against “surveillance that gets away from the values of our democracy.” In particular, she mentioned AI-driven tools like geolocation tracking and face recognition, pointing to a report released earlier this week that found federal law enforcement agencies that are using face recognition lack safeguards to protect people’s privacy and civil rights.

The resounding message she heard from tech CEOs was a desire for US leadership in AI policy. “Whether it was Mark Zuckerberg or Elon Musk or Bill Gates or [Alphabet CEO] Sundar Pichai, there was a clear resonance that the US must take the lead in AI policy and regulation,” she says.

Murguía was glad to see women like Maya Wiley from the Leadership Conference on Civil and Human Rights and union leaders at the forum, representation she called impressive and historic. But she wants to see people from more segments of society in the room at the next forum, saying, “We can’t have the same small circle of folks that aren’t diverse making these decisions.”

In her remarks during yesterday’s briefing, American Federation of Teachers president Randi Weingarten highlighted WIRED reporting that $400 can bankroll a disinformation campaign. Later, Tristan Harris from the Center for Humane Technology talked about how $800 and a few hours of work stripped Meta’s Llama 2 language model of safety controls and made it share instructions on how to make a biological weapon.

“It’s like we were having a debate about how little it costs to ruin the world,” Weingarten says, pointing to Musk’s comment about how AI could spell the end of civilization.

Weingarten credits Schumer for bringing people together at a critical moment in history, when there’s tremendous potential for AI to do good for humanity and tremendous potential to undermine democracy and human decision-making. Teachers and students deserve protections from inequality, identity theft, disinformation, and other harms that AI can fuel, she says, and meaningful federal legislation should protect privacy and seek to resolve issues like job displacement.


“We want the responsibility to keep up with the innovation and think that that is what makes the innovation sustainable, like commercial air and passenger airlines. The innovation would not have been sustainable without a real commitment to safety,” says Weingarten.

Ahead of the forum, Inioluwa Deb Raji, a UC Berkeley researcher, argued that the most reliable experts on real-world harms caused by AI come from outside corporations. She told WIRED she was thankful she was in the room to reiterate her opinion.

A few times, she heard people argue that major AI companies and the Biden administration had agreed corporations could lead voluntary commitments to assess AI systems before deployment because those companies had built the technology and therefore understand it best.

She said perhaps that’s true, but hearing from people impacted by AI systems and examining how they’re affected offers another form of valid and important expertise that can inform regulation of AI and help develop standards. She knows from experience auditing AI systems for years that these systems don’t always work very well and can fail in unexpected ways and endanger human lives. The work of independent auditors, she argued during the briefing, opens things up to more investigation by civil society.

“I’m glad I could be there to bring up some noncorporate talking points, but I wish I had more backup,” Raji says.

Some commonly known tensions came up, such as whether open- or closed-source AI is best, and the importance of addressing the ways AI models that exist today harm people, rather than only looking at existential risks that don’t exist yet. While Musk, who signed a letter in favor of a pause in AI development earlier this year, talked about the possibility of AI wiping out civilization, Raji criticized Tesla’s Autopilot AI, which has come under scrutiny following passenger deaths.

“Maybe I should have cared a little more about the independent wealth of people sitting two steps away from me, but I feel like it wasn’t that intimidating because I knew that they were repeating points that I’ve heard before from corporate representatives at these companies about these exact same topics, so I had a sense of what to expect,” she says.

Despite some disagreements, Raji says, some of the strongest and most surprising moments of the meeting occurred when consensus emerged that government regulation of AI is necessary. Those moments made it seem there may be a path to bipartisan legislation. “That was actually pretty educational for me, and probably for the senators,” she says.

There’s still an aversion to new regulations among many Republicans, which is why Senate Commerce chair Maria Cantwell, a Democrat from Washington state, was struck by how Microsoft CEO Satya Nadella framed the challenge.

“‘When it comes to AI, we shouldn’t be thinking about autopilot—like, you need to have copilots,’” Cantwell says, paraphrasing Nadella’s comments. “So who’s going to be watching, you know, this activity and making sure that it’s done correctly?”


While all the CEOs, union bosses, and civil rights advocates were asked to raise their hands at points, one flaw with muzzling senators, according to critics on both sides of the proverbial aisle, is that lawmakers weren’t easily able to game out where their allies stand in the Senate. And coalitions are key to compromise.

“There’s no feeling in the room,” says Senator Elizabeth Warren, a Massachusetts Democrat. “Closed-door [sessions] for tech giants to come in and talk to senators and answer no tough questions is a terrible precedent for trying to develop any kind of legislation.”

While Warren sat in the front row—close enough so the assembled saw the whites of her fiery, consumer-focused eyes—other critics boycotted the affair, even as they sought out the throngs of reporters huddled in the halls.

“My concern is that [Schumer’s] legislation is leading to nowhere. I mean, I haven’t seen any indication he’s actually going to put real legislation on the floor. It’s a little bit like with antitrust the last two years, he talks about it constantly and does nothing about it,” says Senator Josh Hawley, a Missouri Republican. “Part of what this is is a lot of song and dance that covers the fact that actually nothing is advancing. The whole fact that it’s not public, it’s just absurd.”

Absurd or not, some inside were placated, in part, because senators were reminded that AI isn’t just our future, it’s been in our lives for years—from social media to Google searches to self-driving cars and video doorbells—without destroying the world.

“I learned that we’re in good shape, that I’m not overly concerned about it,” says Senator Roger Marshall, a Kansas Republican. “I think artificial intelligence has been around for decades, most of it machine learning.”

Marshall stands out as an outlier, though his laissez-faire thinking is coming into vogue in the GOP, which critics attribute to lobbying from the very firms whose leaders were in yesterday’s briefing.

“The good news is, the United States is leading the way on this issue. I think as long as we stay on the front lines, like we have the military weapons advancement, like we have in satellite investments, we’re gonna be just fine,” Marshall says. “I’m very confident we’re moving in the right direction.”

Still, studious attendees left with a renewed sense of urgency, even if that involves first studying a technology few truly understand, including those on the dais. It seems the more senators learn about the sweeping scope of generative AI, the more they recognize there’s no end to the Senate’s new regulatory role.

“Are we ready to go out and write legislation? Absolutely not,” says Senator Mike Rounds, a South Dakota Republican who helped Schumer run the bipartisan AI forums, the next of which will focus on innovation. “We’re not there.”

In what was once heralded as the “world’s greatest deliberative body,” even the timeline for legislation is debatable. “Everyone’s nodding their head saying, ‘Yeah, this is something we need to act on,’ so now the question is, ‘How long does it take to get to a consensus?’” says Senator John Hickenlooper, a Colorado Democrat. “But in broad strokes, I think that it’s not unreasonable to expect to get something done next year.”
