
Meta Ran a Giant Experiment in Governance. Now It’s Turning to AI

Late last month, Meta quietly announced the results of an ambitious, near-global deliberative “democratic” process to inform decisions around the company’s responsibility for the metaverse it is creating. This was not an ordinary corporate exercise. It involved over 6,000 people who were chosen to be demographically representative across 32 countries and 19 languages. The participants spent many hours in conversation in small online group sessions and got to hear from non-Meta experts about the issues under discussion. Eighty-two percent of the participants said that they would recommend this format as a way for the company to make decisions in the future.

Meta has now publicly committed to running a similar process for generative AI, a move that aligns with the huge burst of interest in democratic innovation for governing or guiding AI systems. In doing so, Meta joins Google DeepMind, OpenAI, Anthropic, and other organizations that are starting to explore approaches based on the kind of deliberative democracy that I and others have been advocating for. (Disclosure: I am on the application advisory committee for the OpenAI Democratic Inputs to AI grant.) Having seen the inside of Meta’s process, I am excited about this as a valuable proof of concept for transnational democratic governance. But for such a process to truly be democratic, participants would need greater power and agency, and the process itself would need to be more public and transparent.

I first got to know several of the employees responsible for setting up Meta’s Community Forums (as these processes came to be called) in the spring of 2019 during a more traditional external consultation with the company to determine its policy on “manipulated media.” I had been writing and speaking about the potential risks of what is now called generative AI and was asked (alongside other experts) to provide input on the kind of policies Meta should develop to address issues such as misinformation that could be exacerbated by the technology.

At around the same time, I first learned about representative deliberations—an approach to democratic decisionmaking that has taken off like wildfire, with increasingly high-profile citizen assemblies and deliberative polls all over the world. The basic idea is that governments bring difficult policy questions back to the public to decide. Instead of a referendum or elections, a representative microcosm of the public is selected via lottery. That group is brought together for days or even weeks (with compensation) to learn from experts, stakeholders, and each other before coming to a final set of recommendations.
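The lottery step described above amounts to stratified random sampling: the candidate pool is divided by demographic attributes, seats are allocated to each stratum in proportion to its share of the population, and participants are then drawn at random within each stratum. The sketch below is only an illustration of that idea, not how any actual assembly is convened; the function name and the use of the largest-remainder method for rounding quotas are my assumptions.

```python
import random
from collections import defaultdict

def stratified_lottery(pool, strata_key, seats, seed=None):
    """Draw `seats` participants from `pool` by lottery, allocating
    seats across strata (e.g. country or age band, as returned by
    `strata_key`) in proportion to each stratum's share of the pool."""
    rng = random.Random(seed)

    # Group candidates by stratum.
    groups = defaultdict(list)
    for person in pool:
        groups[strata_key(person)].append(person)

    total = len(pool)

    # Proportional quotas, rounded with the largest-remainder method.
    quotas = {s: seats * len(g) / total for s, g in groups.items()}
    alloc = {s: int(q) for s, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True)
    for s in by_remainder[:leftover]:
        alloc[s] += 1

    # Random draw within each stratum.
    selected = []
    for s, g in groups.items():
        selected.extend(rng.sample(g, min(alloc[s], len(g))))
    return selected
```

For instance, drawing 6 seats from a pool that is two-thirds country A and one-third country B yields 4 participants from A and 2 from B, chosen at random within each country. Real assemblies stratify on several attributes at once and must also correct for unequal response rates, which this sketch ignores.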

Representative deliberations provided a potential solution to a dilemma I had been wrestling with for a long time: how to make decisions about technologies that impact people across national boundaries. I began advocating for companies to pilot these processes to help make decisions around their most difficult issues. When Meta independently kicked off such a pilot, I became an informal advisor to the company’s Governance Lab (which was leading the project) and then an embedded observer during the design and execution of its mammoth 32-country Community Forum process (I did not accept compensation for any of this time).

Above all, the Community Forum was exciting because it showed that running this kind of process is actually possible, despite the immense logistical hurdles. Meta’s partners at Stanford largely ran the proceedings, and I saw no evidence of Meta employees attempting to force a result. The company also followed through on its commitment to have those partners directly report the results, no matter what they were. What’s more, it was clear that some thought was put into how best to implement the potential outputs of the forum. The results ended up including perspectives on what kinds of repercussions would be appropriate for the hosts of metaverse spaces with repeated bullying and harassment, and what kinds of moderation and monitoring systems should be implemented.


Compared to the negativity dominating political discourse, the well-intentioned and forthright deliberations during Meta’s Community Forum were a breath of fresh air. The process was not without significant flaws, however. Because participants had minimal agency over how they interacted with each other, and no direct interaction with the decisionmakers (Meta employees), the process often felt more like a data-gathering experiment than a democratic exercise. Moreover, while most participants appeared to understand the issues, and there was some meaningful deliberation, the extent and depth of that deliberation sometimes appeared insufficient for the questions at hand. Meta also has yet to fulfill its commitment to explain what actions it will take based on the results.

When Meta runs a similar process on generative AI, it should aim to correct the shortcomings of its first Community Forum and take cues from some of the best practice guidelines for similar processes run for governments. Given the rapid rate of AI developments, it will also be critical to have participants specify the conditions under which their recommendations would apply—and the conditions under which they would no longer be applicable.

Some might argue that the best approach to addressing issues of platforms or AI is to leave them all to existing democratic governments or to simply decentralize decisionmaking. But neither approach is sufficient. Autocratic and self-serving partisan governments have prevented or weaponized relevant regulation. National boundaries can make it very difficult to address challenges that cross borders. And open source or protocol-based decentralization offers limited ability to address issues like misinformation and harassment. Crypto-based systems that don’t use processes like representative deliberations face even more extreme forms of inequality, with major token holders wielding disproportionate power. We need ways for companies to make informed and democratic decisions—at least where some centralized non-state power may be in the public interest.

What would it take for something like a representative deliberation to truly achieve that ideal? A process aiming for a global mandate could counterintuitively have fewer total participants (say, 1,000 people) across more countries, leading to greater resources per person and thus time for deeper deliberations. It might provide more agency for participants so they could directly suggest new proposals—something that careful application of AI can make possible at scale. Finally, the deliberations would need to be structured to ensure the influence, transparency, and gravitas appropriate for a democratic process. For example, the convening organization should commit upfront to not just releasing the results but also responding to them by a given date, and all sessions outside of small group discussions should be made public.

If we are to find safe passage between the Scylla of autocratic centralization and the Charybdis of ungovernable decentralization, we will need to continue refining our collective decisionmaking processes. They won’t be perfect the first time, or even the second—but if we are to survive in a world of rapidly accelerating AI advances, we will need to just as rapidly experiment and innovate in our approaches to transnational governance.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at ideas@wired.com.
