
Chatbots Got Big—and Their Ethical Red Flags Got Bigger

In the weeks following the release of OpenAI’s viral chatbot ChatGPT late last year, Google AI chief Jeff Dean expressed concern that deploying a conversational search engine too quickly might pose a reputational risk for Alphabet. But last week Google announced its own chatbot, Bard, which in its first demo made a factual error about the James Webb Space Telescope.

Also last week, Microsoft integrated ChatGPT-based technology into Bing search results. Sarah Bird, Microsoft’s head of responsible AI, acknowledged that the bot could still “hallucinate” untrue information but said the technology had been made more reliable. In the days that followed, Bing claimed that running was invented in the 1700s and tried to convince one user that the year is 2022.

Alex Hanna sees a familiar pattern in these events—financial incentives to rapidly commercialize AI outweighing concerns about safety or ethics. "There isn’t much money in responsibility or safety, but there’s plenty in overhyping the technology," says Hanna, who previously worked on Google’s Ethical AI team and is now head of research at nonprofit Distributed AI Research.

The race to make large language models—AI systems trained on massive amounts of data from the web to work with text—and the movement to make ethics a core part of the AI design process began around the same time. In 2018, Google launched the language model BERT, and before long Meta, Microsoft, and Nvidia had released similar projects based on the AI that is now part of Google search results. Also in 2018, Google adopted AI ethics principles that it said would constrain its future projects. Since then, researchers have warned that large language models carry heightened ethical risks and can spew or even intensify toxic, hateful speech. These models are also predisposed to making things up.

As startups and tech giants have attempted to build competitors to ChatGPT, some in the industry wonder whether the bot has shifted perceptions of when it’s acceptable or ethical to deploy AI powerful enough to generate realistic text and images.

OpenAI’s process for releasing models has changed in the past few years. Executives said the text generator GPT-2 was released in stages over months in 2019 due to fear of misuse and its impact on society (that strategy was criticized by some as a publicity stunt). In 2020, the training process for its more powerful successor, GPT-3, was well documented in public, but less than two months later OpenAI began commercializing the technology through an API for developers. By November 2022, the ChatGPT release process included no technical paper or research publication, only a blog post, a demo, and soon a subscription plan.


Irene Solaiman, policy director at open source AI startup Hugging Face, believes outside pressure can help hold AI systems like ChatGPT to account. She is working with people in academia and industry to create ways for nonexperts to perform tests on text and image generators to evaluate bias and other problems. If outsiders can probe AI systems, companies will no longer have an excuse to avoid testing for things like skewed outputs or climate impacts, says Solaiman, who previously worked at OpenAI on reducing the toxicity of its systems.

Each evaluation is a window into an AI model, Solaiman says, not a perfect readout of how it will always perform. But she hopes to make it possible to identify and stop harms that AI can cause because alarming cases have already arisen, including players of the game AI Dungeon using GPT-3 to generate text describing sex scenes involving children. “That’s an extreme case of what we can’t afford to let happen,” Solaiman says.

Solaiman’s latest research at Hugging Face found that major tech companies have taken an increasingly closed approach to the generative models they released from 2018 to 2022. That trend accelerated with Alphabet’s AI teams at Google and DeepMind, and more widely across companies working on AI after the staged release of GPT-2. Companies that guard their breakthroughs as trade secrets can also make the forefront of AI less accessible for marginalized researchers with few resources, Solaiman says.

As more money gets shoveled into large language models, closed releases are reversing the trend seen throughout the history of the field of natural language processing. Researchers have traditionally shared details about training data sets, parameter weights, and code to promote reproducibility of results. “We have increasingly little knowledge about what data sets systems were trained on or how they were evaluated, especially for the most powerful systems being released as products,” says Alex Tamkin, a Stanford University PhD student whose work focuses on large language models.

He credits people in the field of AI ethics with raising public consciousness about why it’s dangerous to move fast and break things when technology is deployed to billions of people. Without that work in recent years, things could be a lot worse.

In fall 2020, Tamkin co-led a symposium with OpenAI’s policy director, Miles Brundage, about the societal impact of large language models. The interdisciplinary group emphasized the need for industry leaders to set ethical standards and take steps like running bias evaluations before deployment and avoiding certain use cases.

Tamkin believes external AI auditing services need to grow alongside the companies building on AI because internal evaluations tend to fall short. He believes participatory methods of evaluation that include community members and other stakeholders have great potential to increase democratic participation in the creation of AI models.

Merve Hickok, a research director at an AI ethics and policy center at the University of Michigan, says trying to get companies to put aside or puncture AI hype, regulate themselves, and adopt ethics principles isn’t enough. Protecting human rights means moving past conversations about what’s ethical and into conversations about what’s legal, she says.

Hickok and Hanna of DAIR are both watching the European Union finalize its AI Act this year to see how it treats models that generate text and imagery. Hickok says she’s especially interested in seeing how European lawmakers treat liability for harm involving models created by companies like Google, Microsoft, and OpenAI.

“Some things need to be mandated because we have seen over and over again that if not mandated, these companies continue to break things and continue to push for profit over rights, and profit over communities,” Hickok says.

While policy gets hashed out in Brussels, the stakes remain high. A day after the Bard demo mistake, a drop in Alphabet’s stock price shaved about $100 billion off the company’s market value. “It’s the first time I’ve seen this destruction of wealth because of a large language model error on that scale,” says Hanna. She is not optimistic this will convince the company to slow its rush to launch, however. “My guess is that it’s not really going to be a cautionary tale.”

Updated 2-16-2023, 12:15 pm EST: A previous version of this article misspelled Merve Hickok's name.
