The AI Doomsday Bible Is a Book About the Atomic Bomb

In December 1938, two German chemists working on the ground floor of a grand research institute in suburban Berlin accidentally ushered the nuclear era into existence. The chemists, Otto Hahn and Fritz Strassmann, weren’t working on a bomb. They were bombarding uranium with radiation to see what substances this process created—just another experiment in a long string of assays trying to figure out the strange physics of the radioactive metal.

What Hahn and Strassmann ended up discovering was nuclear fission—splitting uranium atoms in two and releasing the enormous energy locked up within the atomic nucleus. To nuclear physicists, the implications of this strange experiment were immediately obvious. In January 1939, the Danish physicist Niels Bohr carried the news across the Atlantic to a conference in Washington, DC, where scientists were stunned by the findings. A few weeks later, on his blackboard at the University of California, Berkeley’s Radiation Laboratory, J. Robert Oppenheimer sketched the first crude drawing of an atomic bomb.

"It is a profound and necessary truth that the deep things in science are not found because they are useful. They are found because it was possible to find them," Oppenheimer said long after the bombs he helped create were dropped on Hiroshima and Nagasaki. The story of how the atomic bomb came into being is also of intense interest to another group of scientists grasping after deep things with unknown consequences: artificial intelligence researchers. The definitive telling of that story is Richard Rhodes’ Pulitzer Prize–winning The Making of the Atomic Bomb, first released in 1986. The 800-page tome has become something of a sacred text for people in the AI industry. It’s a favorite among employees at Anthropic, the maker of the ChatGPT-style chatbot Claude. Charlie Warzel at The Atlantic described the book as “a kind of holy text for a certain type of AI researcher—namely, the type who believes their creations might have the power to kill us all.” The quest to create all-powerful AIs just might be the 21st century’s version of the Manhattan Project, a queasy parallel that hasn’t escaped the attention of Oppenheimer director Christopher Nolan, either.

AI researchers can see themselves in the story of a small community of scientists who find that their work could shape the future trajectory of humankind for better or worse, says Haydn Belfield, a researcher at the University of Cambridge who focuses on the risks posed by artificial intelligence. “It’s a very, very meaningful story for a lot of people in AI,” he says, “because part of it parallels people's experience, and I think people are quite concerned of repeating the same mistakes that previous generations of scientists have made.”

One key difference between the physicists of the 1930s and today’s artificial intelligence developers is that the physicists immediately thought they were in a race with Nazi Germany. Fission had been discovered by German chemists working under the Third Reich, after all, and the country also had access to uranium mines after annexing parts of Czechoslovakia. The physicist Leo Szilard—who first conceived of the idea of a nuclear chain reaction—convinced Albert Einstein to sign a letter to President Roosevelt warning that if the US didn’t start work on a bomb, it might well find itself behind in a race with the Nazis.

“For every single one of them, the main motivation was to get a nuclear bomb before the Nazis,” Belfield says. But as Rhodes’ book shows, motivations morphed as the war went on. Initially devised as a way of staying ahead of Nazi Germany, the bomb soon became a tool to shorten the war in the Pacific and a way for the US to enter the looming Cold War several steps ahead of the USSR. When it became clear that Nazi Germany wasn’t capable of developing a nuclear weapon, the only scientist to leave Los Alamos on moral grounds was Joseph Rotblat, a Jewish physicist from Poland who later became a prominent campaigner against nuclear weapons. When he accepted the Nobel Peace Prize in 1995, Rotblat chastised “disgraceful” fellow scientists for fueling the nuclear arms race. “They did great damage to the image of science,” he said.

Artificial intelligence researchers may wonder whether they’re in a modern-day arms race for more powerful AI systems. If so, who is it between? China and the US—or the handful of mostly US-based labs developing these systems?

It might not matter. One lesson from The Making of the Atomic Bomb is that imagined races are just as powerful a motivator as real ones. If an AI lab goes quiet, is that because it’s struggling to push the science forward, or is it a sign that something major is on the way?

When OpenAI released ChatGPT in November 2022, Google’s management announced a code red situation for its AI strategy, and other labs doubled down on their efforts to bring products to the public. “The attention [OpenAI] got clearly created some level of race dynamics,” says David Manheim, head of policy and research at the Association for Long Term Existence and Resilience in Israel.

More transparency between companies could help head off such dynamics. The US kept the Manhattan Project a secret from the USSR, only informing its ally of its devastating new weapon a week after the Trinity test. At the Potsdam conference on July 24, 1945, President Truman shrugged off his translator and sidled over to the Soviet premier to tell him the news. Joseph Stalin seemed unimpressed by the revelation, only saying that he hoped the US would make use of the weapon against the Japanese. In lectures he gave nearly 20 years later, Oppenheimer suggested that this was the moment the world lost the chance to avoid a deadly nuclear arms race after the war.

In July 2023, the White House secured a handful of voluntary commitments from AI labs that at least nodded toward some element of transparency. Seven AI companies, including OpenAI, Google, and Meta, agreed to have their systems tested by internal and external experts before their release and also to share information on managing AI risks with governments, civil society, and academia.

But if transparency is crucial, governments need to be specific about the kinds of dangers they’re protecting against. Although the first atomic bombs were “of unusual destructive force”—to use Truman’s phrase—the kind of citywide destruction they could wreak was not wholly unknown during the war. On the nights of March 9 and 10, 1945, American bombers dropped more than 2,000 tons of incendiary bombs on Tokyo in a raid that killed more than 100,000 residents—a similar number to those killed in the Hiroshima bombing. One of the main reasons why Hiroshima and Nagasaki were chosen as the targets of the first atomic bombs was that they were two of the few Japanese cities that had not been utterly devastated by bombing raids. US generals thought it would be impossible to assess the destructive power of these new weapons if they were dropped on cities that were already gutted.

When US scientists visited Hiroshima and Nagasaki after the war, they saw that these two cities didn’t look all that different from other cities that had been firebombed with more conventional weapons. “There was a general sense that, when you could fight a war with nuclear weapons, deterrence or not, you would need quite a few of them to do it right,” Rhodes said recently on the podcast The Lunar Society. But the most powerful fusion weapons developed after the war were thousands of times more powerful than the fission weapons dropped on Japan. It was difficult to truly appreciate the amount of stockpiled destruction during the Cold War simply because earlier nuclear weapons were so small by comparison.

There is an order-of-magnitude problem when it comes to AI too. Biased algorithms and poorly implemented AI systems already threaten livelihoods and liberty today—particularly for people in marginalized communities. But the worst risks from AI lurk somewhere in the future. What is the real magnitude of risk that we’re preparing for—and what can we do about it?

“I think one of our biggest risks is fighting about whether short-term versus long-term impacts are more important when we’re not spending enough time thinking about either,” says Kyle Gracey, a consultant at Future Matters, a nonprofit that trains companies on AI risk reduction. Gracey first picked up The Making of the Atomic Bomb when they were at college, and was struck by the sheer size and strength of the communities that went into building the atomic bomb—scientists, sure, but also families, laborers, and supporters who worked on the project. Gracey sees the real AI race as one to build a safety community that extends way beyond just scientists.

That might mean bridging the gap between different kinds of people who worry about AI. Short- and long-term AI risks are not entirely separate beasts. It was no accident that most of those killed by the atomic bombs were civilians. Aerial bombing of civilians didn’t start in WWII, but this devastating mode of warfare took hold as the war went on. Strategic bombing raids on military sites in England slowly morphed into the Blitz as daylight attacks became impossible for the Luftwaffe. Allied bombers responded with huge raids on German cities, and later total bombing campaigns across Japan. With each new attack, the devastation rained on civilian populations ratcheted up another sickening notch. The Twentieth Air Force’s bombing directive for Japanese cities had the “prime purpose” of “not leaving one stone lying on another.”

When the bomb came on the scene, there was little doubt it would be used against civilian targets. There were simply no military targets left that merited a weapon of such magnitude. And, besides, it was a natural continuation of a war in which civilian deaths outnumbered military deaths by something like a ratio of 2:1. The bomb was a technological leap when it came to delivering destruction, but the conceptual leap to relentless war on noncombatants had been made years earlier. Although we don’t know the capacities of future artificial intelligence systems, we can—and should—think very carefully before we dismiss present-day concerns about AI threatening the jobs of low-income workers or undermining trust in elections and institutions.

Getting angry about these developments doesn’t mean you hate AI—it means you’re worried about the fate of your fellow humans. Nolan, who has spent a lot of time thinking about AI and the bomb as of late, made a similar point in a recent interview with WIRED. “If we endorse the view that AI is all-powerful, we are endorsing the view that it can alleviate people of responsibility for their actions—militarily, socio­economically, whatever,” he said. “The biggest danger of AI is that we attribute these godlike characteristics to it and therefore let ourselves off the hook.” Nuclear fission was always out there to be discovered, but the decision to use it to kill humans is squarely on human shoulders.

There is another reason why AI researchers might be so interested in Rhodes’ book: It depicts a group of youngish, nerdy scientists working on a mission of world-changing significance. As much as some AI developers fear that their creations might destroy the world, many also think they’ll unleash creativity, supercharge economies, and release people from the burden of inane work. “You are about to enter the greatest golden age,” OpenAI CEO Sam Altman told young people at a talk in Seoul in June. Or it could kill us all.

The scientists who made the atomic bomb recognized the duality of their situation too. Niels Bohr, who carried the news of the fission experiment across the Atlantic, thought that the discovery might lead to an end of war. The physicist is the moral conscience running through Rhodes’ book. He sensed that this radical new technology could be the key to a better world, if only politicians embraced openness before an arms race set in. In 1944, Bohr met President Roosevelt and suggested that the US approach the Soviet Union to try to broker some kind of agreement over the use of nuclear weapons. Later that year, he made a similar entreaty to Winston Churchill.

The British prime minister was not so receptive to Bohr’s ideas. “The President and I are much worried about Professor Bohr,” Churchill wrote in a memo after meeting the scientist. “It seems to me that [he] ought to be confined or at any rate made to see that he is very near the edge of mortal crimes.” Churchill was disturbed by the idea that the Allies would share news of the bomb before its terrifying destructive power had been proven in battle—least of all with their soon-to-be enemy, the USSR. Bohr was never invited to meet again with the president or the prime minister. Of the two possible futures envisaged by the scientist, the world would head down the path he feared most.
