
The Age of AI Hacking Is Closer Than You Think

This story is adapted from A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back, by Bruce Schneier.

How realistic is a future of AI hacking?

Its feasibility depends on the specific system being modeled and hacked. For an AI to even begin optimizing a solution, let alone develop a completely novel one, all of the rules of the environment must be formalized in a way the computer can understand. Goals—known in AI as objective functions—need to be established. The AI needs some sort of feedback on how well it is doing so that it can improve its performance.
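Those three ingredients can be made concrete with a toy sketch (illustrative only, not from the book): a search over numbers stands in for the formalized rules, a made-up scoring function plays the role of the objective, and the feedback loop simply keeps whatever scores better.

```python
# Minimal sketch of the three ingredients: rules (the search space),
# an objective function, and feedback that drives improvement.
# The objective here is invented purely for illustration.
import random

def objective(x):                     # objective function: what counts as "good"
    return -(x - 3.7) ** 2            # best possible score is at x = 3.7

random.seed(0)                        # deterministic run for the example
x = 0.0
for _ in range(10_000):               # feedback loop: keep changes that score better
    candidate = x + random.uniform(-0.1, 0.1)
    if objective(candidate) > objective(x):
        x = candidate

print(round(x, 1))                    # hill climbing converges to ~3.7
```

Nothing here is intelligent; it is the bare loop of propose, score, keep. The point is only that all three pieces had to be written down formally before any optimization could happen.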

Sometimes this is a trivial matter. For a game like Go, it’s easy. The rules, objective, and feedback—did you win or lose?—are all precisely specified, and there’s nothing outside of those things to muddy the waters. The GPT-3 AI can write coherent essays because its “world” is just text. This is why most of the current examples of goal and reward hacking come from simulated environments. Those are artificial and constrained, with all of the rules specified to the AI.
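A hypothetical miniature of reward hacking in exactly this kind of fully specified environment: the designer intends the agent to reach a goal cell, but the reward as written actually pays for survival, so a brute-force optimizer discovers that pacing in place beats ever reaching the goal. The environment and reward are invented for illustration.

```python
# Toy reward hacking in a fully specified environment. Intended goal:
# reach cell 10. The (flawed) reward: +1 for every step survived.
# A brute-force search over fixed-length action sequences finds that
# pacing back and forth outscores heading for the goal, which ends the run.
from itertools import product

def run(actions):
    pos, reward = 0, 0
    for a in actions:          # a is -1 (step left) or +1 (step right)
        pos += a
        if pos < 0:            # fell off the track: run ends, no reward
            break
        reward += 1            # flawed reward: pays for survival, not progress
        if pos == 10:          # reaching the goal also ends the run
            break
    return reward

best = max(product((-1, 1), repeat=12), key=run)
print(run(best))   # 12: this top scorer paces back and forth, never reaching the goal
```

The rules, objective, and feedback are all perfectly specified, and the optimizer still subverts the designer’s intent—because the specification, not the intent, is what it optimizes.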

What matters is the amount of ambiguity in a system. We can imagine feeding the world’s tax laws into an AI, because the tax code consists of formulas that determine the amount of tax owed. There is even a programming language, Catala, that is optimized to encode law. Even so, all law contains some ambiguity. That ambiguity is difficult to translate into code, so an AI will have trouble dealing with it. AI notwithstanding, there will be full employment for tax lawyers for the foreseeable future.
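A hedged sketch of that point: the formulaic half of a tax computation is easy to encode (the brackets and rates below are invented for illustration, not drawn from any real tax code), while the ambiguity lives upstream, in deciding what counts as taxable income at all.

```python
# The formulaic part of tax law translates readily into code.
# Brackets and rates are made up for illustration only.
BRACKETS = [(0, 0.10), (10_000, 0.20), (40_000, 0.30)]  # (threshold, rate)

def tax_owed(taxable_income):
    """Progressive tax: each slice of income is taxed at its bracket's rate."""
    owed = 0.0
    bounds = BRACKETS[1:] + [(float("inf"), 0.0)]
    for (lo, rate), (hi, _) in zip(BRACKETS, bounds):
        if taxable_income > lo:
            owed += (min(taxable_income, hi) - lo) * rate
    return owed

# The hard part is upstream of this function: deciding what counts as
# "taxable income" turns on ambiguous terms ("ordinary and necessary
# expenses") that resist this kind of formalization.
print(round(tax_owed(50_000)))  # 10000: 10k@10% + 30k@20% + 10k@30%
```

The function is trivial; the hundred pages of definitions and exceptions feeding its single input argument are where the lawyers stay employed.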

Most human systems are even more ambiguous. It’s hard to imagine an AI coming up with a real-world sports hack like curving a hockey stick. An AI would have to understand not just the rules of the game but also human physiology, the aerodynamics of the stick and the puck, and so on. It’s not impossible, but it would be a lot more difficult than coming up with a novel Go move.

This latent ambiguity in complex societal systems offers a near-term security defense against AI hacking. We won’t have AI-generated sports hacks until androids actually play those sports, or until a generalized AI is developed that is capable of understanding the world broadly, in all its intersecting dimensions. A similar challenge exists with casino game hacks or hacks of the legislative process. (Could an AI independently discover gerrymandering?) It will be a long time before AIs can model and simulate the ways that people work, individually and in groups, and longer still before they can devise novel hacks of legislative processes as well as humans can.

But while a world filled with AI hackers is still a science-fiction problem, it’s not a stupid science-fiction problem. Advances in AI are coming fast and furious, and jumps in capability are erratic and discontinuous. Things we thought were hard turned out to be easy, and things we thought would be easy have turned out to be hard. When I was a college student in the early 1980s, we were taught that the game of Go would never be mastered by a computer because of its enormous complexity: not the rules, but the number of possible moves. Today, AIs are Go grandmasters.

So while AI hacking may primarily be tomorrow’s problem, we’re seeing precursors of it today. We need to start thinking about enforceable, understandable, and ethical solutions, because if we can expect anything with AI, it’s that we’ll need those solutions sooner than we anticipate.


Probably the first place to look for AI-generated hacks is in financial systems, since those rules are designed to be algorithmically tractable. High-frequency trading algorithms are a primitive example of this, and will become much more sophisticated in the future. We can imagine equipping an AI with all the world’s financial information in real time, plus all of the world’s laws and regulations, plus newsfeeds and anything else we think might be relevant, then assigning it the goal of “maximum legal profit” or maybe “maximum profit we can get away with.” My guess is that this isn’t very far off, and that the result will be all sorts of novel and completely unexpected hacks. And there will probably be some hacks that are simply beyond human comprehension, which means we’ll never realize they’re happening.

In the short term, we’re more likely to see collaborative AI-human hacks. An AI could identify a potentially exploitable vulnerability, and then an experienced accountant or tax attorney would use their experience and judgment to figure out whether it could be profitably exploited.

For almost all of history, hacking has exclusively been a human activity. Searching for new hacks requires expertise, time, creativity, and luck. When AIs start hacking, that will change. AIs won’t be constrained in the same ways or have the same limits as people. They won’t need to sleep. They’ll think like aliens. And they’ll hack systems in ways we can’t anticipate.

Computers have accelerated hacking across four dimensions: speed, scale, scope, and sophistication. AI will exacerbate all four.

First, speed: The human process of hacking, which sometimes takes months or years, could become compressed to days, hours, or even seconds. What might happen when you feed an AI the entire US tax code and command it to figure out all of the ways one can minimize one’s tax liability? Or, in the case of a multinational corporation, analyze and optimize the entire planet’s tax codes? Could an AI figure out, without being prompted, that it’s smart to incorporate in Delaware and register a ship in Panama? How many vulnerabilities—loopholes—will it find that we don’t already know about? Dozens? Hundreds? Thousands? We have no idea, but we’ll probably find out within the next decade.

Next, scale: Once AI systems begin to discover hacks, they’ll be capable of exploiting them at a scale for which we’re simply not prepared. So when AIs begin to crunch financial systems, they will come to dominate that space. Already our credit markets, tax codes, and laws in general are biased toward the wealthy. AI will accelerate that inequity. The first AIs to hack finance in pursuit of profit won’t be developed by equality-minded researchers; they’ll be developed by global banks and hedge funds and management consultants.

Now, scope: We have societal systems that deal with hacks, but those were developed when hackers were humans, and hacks unfolded at a human pace. We have no system of governance that could quickly and efficiently adjudicate an onslaught of hundreds—let alone thousands—of newly discovered tax loopholes. We simply can’t patch the tax code that quickly. We haven’t been able to prevent humans’ use of Facebook to hack democracy; it’s a challenge to imagine what could happen when an AI does it. If AIs begin to figure out unanticipated but legal hacks of financial systems, then take the world’s economy for a wild ride, recovery will be long and painful.


And finally—sophistication: AI-assisted hacks open the door to complex strategies beyond those that can be devised by the unaided human mind. The sophisticated statistical analyses of AIs can reveal relationships between variables, and thus possible exploits, that the best strategists and experts might never have recognized. That sophistication may allow AIs to deploy strategies that subvert multiple levels of the target system. For example, an AI designed to maximize a political party’s vote share may determine a precise combination of economic variables, campaign messages, and procedural voting tweaks that could make the difference between election victory and defeat, extending the revolution that mapping software brought to gerrymandering into all aspects of democracy. And that’s not even getting into the hard-to-detect tricks an AI could suggest for manipulating the stock market, legislative systems, or public opinion.

At computer speed, scale, scope, and sophistication, hacking will become a problem that we as a society can no longer manage.

I’m reminded of a scene in the movie The Terminator, in which Kyle Reese describes to Sarah Connor the cyborg that is hunting her: “It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever …” We’re not dealing with literal cyborg assassins, but as AI becomes our adversary in the world of social hacking, we might find it just as hard to keep up with its inhuman ability to hunt for our vulnerabilities.

Some AI researchers do worry about the extent to which powerful AIs might overcome their human-imposed constraints and—potentially—come to dominate society. Although this may seem like wild speculation, it’s a scenario worth at least passing consideration and prevention.

Today and in the near future, though, the hacking described in this book will be perpetrated by the powerful against the rest of us. All of the AIs out there, whether on your laptop, online, or embodied in a robot, are programmed by other people, usually in their interests and not yours. Although an internet-connected device like Alexa can mimic being your trusted friend, never forget that it is designed to sell Amazon’s products. And just as Amazon’s website nudges you to buy its house brands instead of competitors’ higher-quality goods, it won’t always be acting in your best interest. It will hack your trust in Amazon for the goals of its shareholders.

In the absence of any meaningful regulation, there really isn’t anything we can do to prevent AI hacking from unfolding. We need to accept that it is inevitable, and build robust governing structures that can quickly and effectively respond by normalizing beneficial hacks into the system and neutralizing the malicious or inadvertently damaging ones.

This challenge raises deeper, harder questions than how AI will evolve or how institutions can respond to it: What hacks count as beneficial? Which are damaging? And who decides? If you think government should be small enough to drown in a bathtub, then you probably think hacks that reduce government’s ability to control its citizens are usually good. But you still might not want to substitute technological overlords for political ones. If you believe in the precautionary principle, you want as many experts testing and judging hacks as possible before they’re incorporated into our social systems. And you might want to apply that principle further upstream, to the institutions and structures that make those hacks possible.

The questions continue. Should AI-​created hacks be governed locally or globally? By administrators or by referendum? Or is there some way we can let the market or civil society groups decide? (The current efforts to apply governance models to algorithms are an early indicator of how this will go.) The governing structures we design will grant some people and organizations power to determine the hacks that will shape the future. We’ll need to make sure that that power is exercised wisely.


Excerpted from A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back by Bruce Schneier. Copyright © 2023 by Bruce Schneier. Used with permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.
