Europe Is in Danger of Using the Wrong Definition of AI

What does it mean to be artificially intelligent? More than an endless parlor game for amateur philosophers, this debate is central to the forthcoming Artificial Intelligence Regulation for the 447 million citizens of the European Union. The AI Reg—better known as the AI Act, or AIA—will determine what AI can be deployed in the EU, and how much it costs an organization to do so.

The AIA, of course, is not only a context for this conversation but an artifact of a much larger moment. Vast quantities of data are being gathered not only on us as individuals but also on every component of our societies. Humanity, or at least parts of it, is building understanding about populations, demographies, personalities, and polities. With that understanding comes a capacity to predict and a capacity to manipulate. The regulation and exploitation of transnational technology and commerce has become a concern of international relations and a vector of international conflict. AI is both a source of wealth and a means to translate that wealth more effectively into influence and power.

The present version of the AIA is a draft written by the European Commission published in 2021. Now the elected European Parliament is seeking commentary as it proposes a revised and hopefully improved version. The final version will have enormous influence not only in the EU but on lives and corporations globally, even for people and companies who never visit or do business with the EU.

As with the earlier General Data Protection Regulation (GDPR), the EU is seeking through the AIA first and foremost to expand our digital economy. The GDPR may seem like little more than a bad joke involving too many popups. (In fact, many of the popups are illegally constructed to be overly annoying in an effort to discredit the regulation; Facebook was just recently forced to make theirs clearer.) But the GDPR essentially provides a single legal interface to the entire EU digital market. This has been an economic boon, not only to Big Tech but to startups and the digital economies of many European member states. Even smaller countries like Austria and Greece have seen their digital economies rocket in the past decade, putting to rest the idea that the GDPR—which spent seven years in very public development before coming into force in 2018—would destroy European ecommerce.

Making the EU a powerful, attractive, and competitive market requires making it as easy for a company to operate in all 27 member countries as it would be in any single one. This requires “harmonizing” laws, as well as markets, across the entire bloc, and this requires ensuring the protections each nation demands for its citizens.

What the EU set out to address with the AIA is this: When and how can we allow artifacts to take actions? What system of development and of operations should be in place to ensure that AI systems are safe, effective, and socially beneficial? How can we ensure that those who develop and operate intelligent systems make them trustworthy and equitable?

To answer these questions, we must understand what, exactly, the EU is trying to regulate. What is artificial intelligence? A recently popular joke says that AI is neither intelligent nor artificial. It's true that AI is "real" and present, but nothing matters more in terms of regulation than its artificiality—its constructedness. And this is where we are all in danger, because the European Parliament is at risk of being misled.

The current draft of the AIA starts with a very broad definition of AI: "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with." This may already be too narrow a definition, given what the AIA is primarily about—ensuring that decisions made by or with machines are made well, transparently, and with clear human accountability, at least when those decisions matter.

Unfortunately, about half of the EU member states are pushing back against this broad definition. In a new “presidency compromise” draft, these states propose a narrower definition of artificial intelligence, with two additional requirements: that the system be capable of “learning, reasoning, or modeling implemented with the techniques and approaches” listed in an annex, and that it is also a “generative system,” directly influencing its environment.

This definition is called a "compromise," but it is actually compromised. Particularly because generative models are a special subclass of machine learning, the narrower definition makes it far too easy to argue at length in court that a system falls outside the act's scope, and thus outside the oversight and protection the AIA is intended to guarantee.

Who would want such changes? Well, Google, for one. Google presently wants you to think it has only been using AI for a few years, rather than remembering that the company’s core product, search, has been defined as the heart of AI since the 1970s. In 2020, I sat on a panel with Daniel Schoenberger, who works on legal issues for Google and who strongly supported regulating only the most dangerous AI. Schoenberger described that as being any AI based on a novel machine-learning algorithm less than 24 months old, which he then revised to 18 months. I’ve also just this month been told by a very senior German civil servant that we should only be particularly worried about “self-learning” systems, because they are harder to predict due to their “self-optimizing” nature. Therefore, all regulatory and enforcement resources should be thrown at them.

The call to restrict the definition of AI to only “complex” machine learning or other reasoning “ordinarily understood” to be intelligent is a problem. It produces not one but two ways people building or operating AI could avoid the kind of oversight the AIA was designed to provide, which every market needs for any potentially hazardous processes and products. The AIA is supposed to incentivize clarity and good governance, but a more complex definition of AI incentivizes either complexity or no governance. Either way potentially avoids oversight.

A company could choose the most obscure, nontransparent systems architecture available, claiming (rightly, under this bad definition) that it was "more AI," in order to access the prestige, investment, and government support that claim entails. For example, one giant deep neural network could be given the task not only of learning language but also of debiasing that language on several criteria, say, race, gender, and socio-economic class. Then maybe the company could also sneak in a little slant that points the system toward preferred advertisers or a political party. This would be called AI under either definition, so it would certainly fall into the remit of the AIA. But would anyone reliably be able to tell what was going on with this system? Under the original AIA definition, some simpler way to get the job done would be equally considered "AI," and so there would not be these same incentives to use intentionally complicated systems.

Of course, under the new definition, a company could also switch to using more traditional AI, like rule-based systems or decision trees (or just conventional software). And then it would be free to do whatever it wanted—this is no longer AI, and there’s no longer a special regulation to check how the system was developed or where it’s applied. Programmers can code up bad, corrupt instructions that deliberately or just negligently harm individuals or populations. Under the new presidency draft, this system would no longer get the extra oversight and accountability procedures it would under the original AIA draft. Incidentally, this route also avoids tangling with the extra law enforcement resources the AIA mandates member states fund in order to enforce its new requirements.
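To see how arbitrary that boundary is, consider a minimal sketch. Everything in it is hypothetical: the feature names, the threshold, and the tiny dataset are invented for illustration and come from no real lender or regulatory text. The same loan-screening policy is written twice, once as a hand-coded rule, which the narrower definition would arguably exclude as plain software, and once as a model trained to reproduce exactly that rule, which even the narrow definition would capture.

```python
# Hypothetical illustration: one screening policy, two implementations.
# The threshold, features, and data below are invented for this sketch.

from sklearn.tree import DecisionTreeClassifier


def screen_by_rule(income: float, debt: float) -> bool:
    """Hand-written rule: arguably 'just software' under the narrow
    presidency-compromise definition, and so outside the AIA."""
    return (income - debt) > 20_000  # arbitrary illustrative cutoff


# The identical policy, now "learned" from examples of the rule's own
# outputs: a learning system, and so AI under any definition.
X = [[50_000, 10_000], [30_000, 25_000], [80_000, 5_000], [20_000, 15_000]]
y = [screen_by_rule(income, debt) for income, debt in X]
model = DecisionTreeClassifier().fit(X, y)


def screen_by_model(income: float, debt: float) -> bool:
    return bool(model.predict([[income, debt]])[0])

# Both functions make the same decision with the same stakes for the
# applicant; only the implementation, and with it the regulatory status
# under the narrow definition, differs.
```

The applicant refused a loan experiences identical consequences either way; a definition that regulates one implementation and not the other is regulating the wrong thing.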

Limiting where the AIA applies by complicating and constraining the definition of AI is presumably an attempt to reduce the costs of its protections for both businesses and governments. Of course, we do want to minimize the costs of any regulation or governance—public and private resources both are precious. But the AIA already does that, and does it in a better, safer way. As originally proposed, the AIA already only applies to systems we really need to worry about, which is as it should be.

In the AIA's original form, the vast majority of AI—like that in computer games, vacuum cleaners, or standard smartphone apps—is left to ordinary product law and would not receive any new regulatory burden at all, or at most basic transparency obligations; for example, a chatbot should identify that it is AI, not an interface to a real human.

The most important part of the AIA is where it describes what sorts of systems are potentially hazardous to automate; it then regulates only these. Both drafts of the AIA say that there are a small number of contexts in which no AI system should ever operate—for example, identifying individuals in public spaces from their biometric data, creating social credit scores for governments, or producing toys that encourage dangerous behavior or self-harm. These are all simply banned, more or less. There are far more application areas for which using AI requires government and other human oversight: situations with life-altering outcomes, such as deciding who gets which government services, who gets into which school, or who is awarded what loan. In these contexts, European residents would be provided with certain rights, and their governments with certain obligations, to ensure that the artifacts have been built and are functioning correctly and justly.

Making the AIA not apply to some of the systems we need to worry about—as the "presidency compromise" draft could do—would leave the door open for corruption and negligence. It would also make legal the things the European Commission was trying to protect us from, such as social credit systems and generalized facial recognition in public spaces, as long as a company could claim its system wasn't "real" AI.

When is something really intelligent? That of course depends on how you define the term, or rather on which of the several existing definitions is most useful for the conversation you are having.

For the purposes of digital governance instruments like the AIA, it makes more sense to use a well-established definition of intelligence, dating back to scientists’ first explorations of the evolutionary origins of the human trait by looking at other species: the capacity to act effectively in response to changing contexts. Rocks aren’t intelligent at all, plants are a little intelligent, bees more so, monkeys more so again. Intelligence by this definition is evidently a computational process: the conversion of information about the world into some action. This definition is generally useful because it reminds (or explains to) people that intelligence is not some supernatural property, or just “human-likeness,” but rather a physical process we find throughout nature to varying degrees. It reminds us that AI requires physical as well as legal infrastructure. AI needs computers and communications at least as much as it needs data and algorithms. AI requires power and materials and produces pollution and waste.
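On this broad, behavioral definition, even the humblest control loop counts as a little bit intelligent, which is exactly the point. Here is a minimal sketch of "converting information about the world into some action"; the device and its thresholds are invented for illustration.

```python
# A minimal sketch of intelligence as "computing appropriate action
# from context." The device and thresholds are invented for illustration.

def thermostat(context: dict) -> str:
    """Convert information about the world (a temperature reading)
    into an action appropriate to the context."""
    temp_c = context["room_temp_c"]
    if temp_c < 18.0:
        return "heat"
    if temp_c > 24.0:
        return "cool"
    return "idle"

print(thermostat({"room_temp_c": 16.5}))  # -> heat
print(thermostat({"room_temp_c": 21.0}))  # -> idle
```

Nothing here is mysterious or humanlike, yet it sits on the same continuum as the bee and the monkey: context in, action out. For regulation, what matters is how consequential that action is, not how clever the conversion looks.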

Definitions aside, the rest of the AIA makes clear that what legislators care about with respect to AI are its outcomes. The details of how artifacts perform these tasks are only important to the extent that they determine how hard it is to provide transparency and maintain accountability. In this context, “the computing of appropriate action from context” is the best definition of AI for the AIA because it doesn’t get bogged down in technical details but instead allows the focus to remain on consequences.

The AIA—and indeed all product regulation, particularly for complex systems—should motivate using the simplest strategy that gets the job done well, because that increases the chances that the systems work and are maintainable. This is actually good for business as well as for society. “Maximizing profits” is easier if you can keep delivering your product reliably, safely, and for a long time.

Excluding some digital systems from consideration under the AIA just because they aren't complicated enough might seem like a way to encourage simpler AI, but in fact machine learning will sometimes be the clearest, simplest, and most elegant way to get a project done well. Other times it won't be. Why complicate the release of a product by shifting its regulatory burden radically whenever you make a simple change of algorithmic approach?

And of course, this isn’t just about the AIA. The EU has already set aside large investments for AI, and society is already hyped for it. If the best systems for achieving an outcome suddenly aren’t considered AI, even though they use procedures that have been documented in AI textbooks for decades, then that might push companies and governments to use the second-best system. Or third.

Think of some service that's presently provided by a government using only people and paper. When we write down in computer code, or train up with machine learning, the ways we intend our government employees to behave, we create both challenges and opportunities.

The challenge is that a single mistake made in development may be repeated millions of times by automation without further thought. This is what happened with the British Post Office. Twenty years ago, Fujitsu wrote new accounting software for the Post Office, and bugs were reported immediately. But those reports were ignored, not least because British law presumed that computer evidence is reliable. The software's accounts were therefore believed, and the post office workers were not. Even those with years of good service were forced to make up enormous "financial discrepancies" out of their own pockets. Lives were ruined, families were bankrupted, people were jailed, and there were deaths, including suicides. Only now, twenty years later, are these workers' cases being heard. This is why we need good oversight for any "high risk" digital system—the systems that change lives.

Yet the opportunity of digitizing government services is that we can have a wider-spread understanding and more open discussions of government procedures that were previously obscure. We can now, with minimal cost, make government and other processes like banking or real estate more transparent and more accessible. People can—if they choose to—get a better understanding of the world they live in. This could increase social trust as well as the quality of governance. Contrary to the claims of many, AI and the digital revolution could create a boon for transparency and human understanding—if we make the right decisions about regulation, and then enforce them.

If we choose the simple, broad definition of intelligence I’m advocating here, then we motivate people to use the clearest, most explainable and maintainable version of AI they can, or even just ordinary software. That benefits everyone—corporations and developers just as much as citizens, residents, and activists. We can still write fiction and philosophy about fantastic versions of intelligence that converge or diverge from human intelligence in interesting ways. But for writing laws, let’s be broad and inclusive enough to help us keep our devices and systems safe for our societies.

