Everywhere, AI is breaking. And everywhere, it’s breaking us.
The breaking ensues whenever AI encounters ambiguity or volatility. And in our hazy, unstable world, that’s all the time: Either the data can be interpreted another way or it’s obsolesced by new events. At which point, AI finds itself looking at life through errant eyes, seeing left as right or now as yesterday. Yet because AI lacks self-awareness, it doesn’t realize that its worldview has cracked. So, on it whirs, unwittingly transmitting the fracture to all the things plugged into it. Cars are crashed. Insults are hurled. Allies are auto-targeted.
This breaks humans in the direct sense of harming, even killing, us. But it has also begun breaking us in a more subtle way. AI can malfunction at the mildest hint of data slip, so its architects are doing all they can to dampen ambiguity and volatility. And since the world’s primary source of ambiguity and volatility is humans, we have found ourselves aggressively stifled. We’ve been forced into metric assessments at school, standard flow patterns at work, and regularized sets at hospitals, gyms, and social-media hangouts. In the process, we’ve lost large chunks of the independence, creativity, and daring that our biology evolved to keep us resilient, making us more anxious, angry, and burned out.
If we want a better future, we need to pursue a different remedy to AI’s mental fragility. Instead of remaking ourselves in AI’s brittle image, we should do the opposite. We should remake AI in the image of our antifragility.
Durability is simply resisting damage and chaos; antifragility is getting stronger from damage and smarter from chaos. This can seem more magic than mechanical, but it’s an innate capacity of many biological systems, including human psychology. When we’re kicked in the face, we can bounce back tougher with courage. When our plans collapse, we can rally for the win with creativity.
Building these antifragile powers into AI would be revolutionary. (Disclosure: Angus Fletcher is currently advising AI projects, which include antifragile AI, within the US Department of Defense.) We can achieve the revolution if we upend our current way of thinking.
First, we must banish the futurist delusion that AI is the smarter version of ourselves. AI’s method of cogitation is mechanically distinct from human intelligence: Computers lack emotion, so they can’t literally be courageous, and their logic boards can’t process narrative, rendering them incapable of adaptive strategy. Which means that AI antifragility won’t ever be human, let alone superhuman; it will be a complementary tool with its own strengths and weaknesses.
We then must step toward heresy by acknowledging that the root source of AI’s current fragility is the very thing that AI design now venerates as its high ideal: optimization.
Optimization is the push to make AI as accurate as possible. In the abstract world of logic, this push is unambiguously good. Yet in the real world where AI operates, every benefit comes at a cost. In the case of optimization, the cost is data. More data is required to improve the precision of machine-learning’s statistical computations, and better data is necessary to ensure that the computations are true. To optimize AI’s performance, its handlers must intel-gather at scale, hoovering up cookies from apps and online spaces, spying on us when we’re too oblivious or exhausted to resist, and paying top dollar for inside information and backroom spreadsheets.
This incessant surveillance is antidemocratic, and it’s also a loser’s game. The price of accurate intel climbs without limit; there’s no way to know everything about natural systems, forcing guesses and assumptions; and just when a complete picture is starting to coalesce, some new player intrudes and changes the situational dynamic. Then the AI breaks. The near-perfect intelligence veers into psychosis, labeling dogs as pineapples, treating innocents as wanted fugitives, and barreling eighteen-wheelers into kindergarten buses that it sees as highway overpasses.
The dangerous fragility inherent to optimization is why the human brain did not, itself, evolve to be an optimizer. The human brain is data-light: It draws hypotheses from a few data points. And it never strives for 100 percent accuracy. It’s content to muck along at the threshold of functionality. If it can survive by being right 1 percent of the time, that’s all the accuracy it needs.
The brain’s strategy of minimal viability is a notorious source of cognitive biases that can have damaging consequences: close-mindedness, conclusion jumping, recklessness, fatalism, panic. Which is why AI’s rigorously data-driven method can help illuminate our blindspots and debunk our prejudices. But in counterbalancing our brain’s computational shortcomings, we don’t want to stray into the greater problem of overcorrection. There can be enormous practical upside to a good enough mentality: It wards off perfectionism’s destructive mental effects, including stress, worry, intolerance, envy, dissatisfaction, exhaustion, and self-judgment. A less-neurotic brain has helped our species thrive in life’s punch and wobble, which demands workable plans that can be flexed, via feedback, on the fly.
These antifragile neural benefits can all be translated into AI. Instead of pursuing faster machine-learners that crunch ever-vaster piles of data, we can focus on making AI more tolerant of bad information, user variance, and environmental turmoil. That AI would exchange near-perfection for consistent adequacy, upping reliability and operational range while sacrificing nothing essential. It would suck less energy, haywire less randomly, and place fewer psychological burdens on its mortal users. It would, in short, possess more of the earthly virtue known as common sense.
Here are three specs for how.
Building AI to Brave Ambiguity
Five hundred years ago, Niccolò Machiavelli, the guru of practicality, pointed out that worldly success requires a counterintuitive kind of courage: the heart to venture beyond what we know with certainty. Life, after all, is too fickle to permit total knowledge, and the more that we obsess over ideal answers, the more that we hamper ourselves with lost initiative. So, the smarter strategy is to concentrate on intel that can be rapidly acquired—and to advance boldly in the absence of the rest. Much of that absent knowledge will prove unnecessary, anyway; life will bend in a different direction than we anticipate, resolving our ignorance by rendering it irrelevant.
We can teach AI to operate this same way by flipping our current approach to ambiguity. Right now, when a Natural Language Processor encounters a word—suit—that could denote multiple things—an article of clothing or a legal action—it devotes itself to analyzing ever greater chunks of correlated information in an effort to pinpoint the word’s exact meaning.
This is “closing the circle.” It leverages big data to tighten a circumference of possibilities to a single dot. And 99.9 percent of the time, it works: It correctly concludes that the word suit is part of a judge’s email to counsel. The other 0.1 percent of the time, the AI snaps. It misidentifies a diving suit as a lawyerly conversation, tightening the loop to exclude the actual truth and plunging into an ocean that it thinks is a courtroom.
Let the circle stay big. Instead of designing AI to prioritize resolving ambiguous data points, we can program it to perform quick-and-dirty recalls of all possible significations, and then carry those branching options onto its subsequent tasks, like a human brain that continues reading a poem with multiple potential interpretations held simultaneously in mind. This saves the data intensiveness that traditional machine-learning pours into optimization. In many cases, the ambiguity will get flushed from the system by downstream events: Maybe every executed query resolves identically with either meaning of suit; maybe the system gains access to an email that refers to a lawsuit about a diving suit; maybe the user realizes that (in a typically unpredictable human maneuver) she mistyped suite.
Worst case, if the system encounters a situation where it can’t proceed unless the ambiguity is addressed, it can pause to request human assistance, tempering valor with timely discretion. And in any and all cases, the AI won’t break itself, self-destructing (via a digital version of anxiety) into making unnecessary errors because it’s so stressed about being perfect.
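To make the mechanism concrete, here is a minimal sketch of “letting the circle stay big,” in Python. Instead of forcing an ambiguous token down to one sense, it carries every candidate sense forward and lets downstream context prune the set; when the set won’t narrow, it abstains and defers to a human. The sense inventory and evidence table are hypothetical stand-ins, not a real NLP system.

```python
# Hypothetical sense inventory for a couple of ambiguous tokens.
SENSES = {
    "suit": ["clothing", "legal_action", "diving_gear"],
    "suite": ["hotel_room", "software_bundle"],
}

def candidate_senses(token):
    """Quick-and-dirty recall of all possible significations."""
    return set(SENSES.get(token, [token]))

def prune(candidates, context_tokens):
    """Drop senses that downstream evidence rules out; keep the rest."""
    evidence = {  # hypothetical context-to-sense support
        "judge": {"legal_action"},
        "ocean": {"diving_gear"},
        "tailor": {"clothing"},
    }
    supported = set()
    for tok in context_tokens:
        supported |= evidence.get(tok, set())
    narrowed = candidates & supported
    return narrowed if narrowed else candidates  # never prune to nothing

def interpret(token, context_tokens):
    candidates = prune(candidate_senses(token), context_tokens)
    if len(candidates) == 1:
        return candidates.pop()  # ambiguity resolved by downstream events
    return None                  # still ambiguous: pause, ask the human

print(interpret("suit", ["judge", "counsel"]))  # -> legal_action
print(interpret("suit", ["meeting"]))           # -> None: request assistance
```

The key design choice is in `prune`: the circle only ever tightens when evidence warrants it, and it never tightens to zero, so the actual truth can’t be excluded by an overconfident loop.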
Marshal Data in Support of Creativity
The next big contributor to antifragility is creativity.
Current AI aspires to be creative via a big-data leverage of divergent thinking, a method conceived 70 years ago by Air Force Colonel J.P. Guilford. Guilford succeeded, insofar as he managed to reduce some creativity to computational routines. But because most biological creativity, as subsequent scientific research has shown, involves data-free and nonlogical processes, divergent thinking is far more conservative in its outcomes than human imagination. Although it can spam out gigantic quantities of “new” works, those works are confined to mix-and-matches of earlier models, so what divergent thinking gains in scale it sacrifices in scope.
The practical limitations of this information-powered, robo-formula for imagineering can be witnessed in text and image generators such as GPT-3 and ArtBreeder. By using historical sets to brainstorm, these AI lard their concoctions with expert bias, so that while striving to produce the next van Gogh, they instead emit pastiches of every painter before. The knock-on result of such pseudo-invention is a culture of AI design that categorically misunderstands what innovation is: FaceNet’s “deep convolutional network” is hailed as a breakthrough over previous facial recognition software when it’s more of the same brute-force optimization, like tweaking a car’s torque band to add horsepower—and calling it a revolution in transportation.
The antifragile alternative is to flip from using data as a source of inspiration to using it as a source of falsification. Falsification is the brainchild of Karl Popper, who, ninety years ago, in his Logic of Scientific Discovery, pointed out that it’s more logical to mobilize facts to knock out ideas than to confirm them. When translated onto AI, this Popperian reframe can invert data’s function from a mass-generator of trivially novel ideas into a mass-destroyer of anything except wildly unprecedented ones.
Rather than smudging together billions of existing priors into an endless déjà vu of the mildly new, tomorrow’s antifragile computers could trawl the world’s ever-growing flood of human creations to identify today’s unappreciated van Goghs. Imagine a Pulitzer AI that inputs the winning photographs selected by the panel of human judges—then awards its prize to the news photo that most defies the panel’s expectations.
And in the future, AI could be trained to do the same with its own creations. In place of the high-data ideation method of GPT-3’s ilk, it could harness low-data methods that fling out mostly incoherence but, a tiny fraction of the time, hit upon a genuine original. With falsification, the future-AI could detect that fraction, plucking The Starry Night from a galaxy of nonsense.
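The Popperian flip can be sketched in a few lines of Python: use the historical record not to generate candidates but to eliminate anything it resembles, awarding the prize to the work the priors do the worst job of predicting. The two-dimensional “style” features and names here are purely illustrative assumptions.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_unprecedented(candidates, priors):
    """Return the candidate farthest from its nearest prior --
    i.e., the one existing works least anticipate."""
    def novelty(candidate):
        return min(distance(candidate["features"], p) for p in priors)
    return max(candidates, key=novelty)

# Hypothetical 2-D style features standing in for real embeddings.
priors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # the panel's past picks
candidates = [
    {"name": "pastiche", "features": (0.5, 0.5)},       # mix of old models
    {"name": "starry_night", "features": (8.0, 9.0)},   # genuine outlier
]
print(most_unprecedented(candidates, priors)["name"])   # -> starry_night
```

Note that the data does all its work as a destroyer here: the bigger the archive of priors, the more mere pastiche gets knocked out, and the more sharply the outlier stands alone.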
The Importance of AI-Human Hybridity
In our here and now, the world’s most antifragile intelligence is human psychology. So, why not give AI the full benefits of our brains? Why not merge ourselves with it?
Such hybridity, sci-fi as it sounds, doesn’t require us to go full Elon Musk. We can achieve it simply by engineering better AI-human partnerships.
Those partnerships are currently less than the sum of their parts, existing as bad-faith relations in which humans are treated either as glorified babysitters who micromanage AI for poor decisions—or as subalterns who must blindly acquiesce to AI’s inscrutable auto-updates. The former tips the human brain into a numbingly boring, right/wrong mode of cognition that kills the neural root of creativity. And the latter destroys our independence and renders us passive to a secretive, bean-counting apparatus that reprises the USSR’s Central Statistics Administration.
We can troubleshoot this dystopian union by regearing the collaboration between AI and its human users, starting with three instant fixes.
First, equip AI to identify when it lacks the data required for its computations. Rather than designing AI that strives to be right all the time, design AI that identifies when it cannot be right. To do this is to give AI the deep wisdom of Know Thyself, not by making AI literally self-aware, but by providing it with an insentient mechanism for detecting its own limit of competency. That limit can’t be identified in real-time by AI’s human users. Our brain is incapable of processing data at a computer’s voluminous speed, dooming us to always intervene too late when a clueless algorithm thinks it’s omniscient. But by programming the fool to spot itself, we can train AI to hand over control before it races into mayhem, creating a path for it to earn authentic trust from human users.
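This first fix can be sketched as a crude competency check: the system never claims to be right, it estimates whether the input resembles anything it was trained on, and below a threshold it hands over control. The one-dimensional samples, the familiarity formula, and the threshold are all assumptions for illustration, not a production out-of-distribution detector.

```python
def familiarity(sample, training_samples):
    """Crude proxy for 'do I have the data to judge this?':
    closeness to the nearest training example (1 = identical, ~0 = alien)."""
    nearest = min(abs(sample - t) for t in training_samples)
    return 1.0 / (1.0 + nearest)

def act_or_defer(sample, training_samples, model, threshold=0.5):
    """Hand over control before racing into mayhem."""
    if familiarity(sample, training_samples) < threshold:
        return ("HAND_OVER", None)       # outside competency: ask a human
    return ("ACT", model(sample))

training = [1.0, 2.0, 3.0]
model = lambda x: x * 2                  # stand-in for a real predictor

print(act_or_defer(2.1, training, model))    # within competency
print(act_or_defer(40.0, training, model))   # -> ('HAND_OVER', None)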
Second, improve the human-AI interface. The push for optimization has created design features that are either opaque (riddled with “black box” algorithms that no computer scientist can fathom) or infantilizing (pre-scripted UX menus that glibly usher cubicle employees down rote decision trees). These features should all be walked back. Black box algorithms should be eliminated entirely; if we don’t know what a computer is doing, it doesn’t either. And rigid button bars that transfer AI’s brittle precision onto users should be replaced with open-ended “big circle” lists where the first option is 70 percent likely, the second is 20 percent likely, the third is 5 percent likely, and so on. If the user doesn’t see a good choice on the list, they can redirect the AI or take manual control, maximizing the operational ranges of both computer logic and human initiative.
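The “big circle” list from the paragraph above can be mocked up in a few lines. The point of the sketch is the interface shape, not the probabilities: a ranked, probability-labeled menu with manual control as a first-class final option. The route options and percentages are hypothetical.

```python
def big_circle(ranked_options):
    """ranked_options: list of (label, probability), highest first.
    Returns menu lines ending with an explicit manual-control escape hatch."""
    menu = [f"{i + 1}. {label} ({p:.0%})"
            for i, (label, p) in enumerate(ranked_options)]
    menu.append(f"{len(ranked_options) + 1}. None of these -- take manual control")
    return menu

# Hypothetical options mirroring the 70/20/5 split described above.
options = [("route via Main St", 0.70),
           ("route via Highway 9", 0.20),
           ("route via Oak Ave", 0.05)]

for line in big_circle(options):
    print(line)
```

Because the last entry always exists, the user is never trapped inside the AI’s circle of possibilities: redirecting the machine is as easy as picking any other option.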
Third, decentralize AI by modeling it after the human brain. Just as our brain contains discrete cognitive mechanisms—logic, narrative, emotion—that (like in a Constitutional separation of powers) check-and-balance one another, so can a single AI be designed to combine different inference architectures (for example, neural networks and symbolic GOFAI). This makes the AI less fragile by allowing it to exit dead-end protocols. If a deep-learning backpropagation can’t access the data it needs, the system can transition to if-then procedures. And by enabling AI to view life through multiple epistemologies, decentralization also invests AI-human partnerships with greater antifragility: Rather than concentrating monomaniacally on its own internal optimization strategies, AI can look outward to learn from anthropological cues. If a self-driving algorithm triggers a baffled frown (or some other sign of confusion) in a human user, the AI can flag the algorithm as potentially suspect, so that instead of forcing us to one-way adapt to its performance quirks, it adapts to our psychology too.
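A toy version of this separation of powers, under stated assumptions: a statistical path that abstains when it lacks data, backed by hand-written if-then procedures as a second epistemology. The observations, rules, and lookup-table “neural net” are illustrative stand-ins, not a real driving stack.

```python
def neural_guess(observation, training_data):
    """Stand-in for a learned model: abstains off-distribution
    rather than hallucinating an answer."""
    if observation not in training_data:
        return None                      # dead-end protocol: not enough data
    return training_data[observation]

def symbolic_rules(observation):
    """Hand-written if-then procedures (symbolic GOFAI)."""
    if "red_light" in observation:
        return "stop"
    if "clear_road" in observation:
        return "proceed"
    return "slow_down"                   # conservative default

def choose_action(observation, training_data):
    """Check-and-balance: exit the statistical dead end into rules."""
    guess = neural_guess(observation, training_data)
    if guess is not None:
        return ("neural", guess)
    return ("symbolic", symbolic_rules(observation))

training_data = {"green_light": "proceed"}
print(choose_action("green_light", training_data))  # -> ('neural', 'proceed')
print(choose_action("red_light", training_data))    # -> ('symbolic', 'stop')
```

The same dispatcher is the natural place to act on anthropological cues: a flagged algorithm (say, one that keeps triggering baffled frowns) can simply be routed around, the way the rules route around missing data here.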
These blueprints aren’t Neuralink implants, Artificial General Intelligence, or some other quixotic technology. They’re design innovations that we can implement, now.
All they require is the courage to leave behind big data and its false promise of perfect intelligence. All they require is accepting that, in our uncertain and ever-changing world, it’s smarter to be creatively adequate than optimally accurate. Because it’s better to bounce back than break.