
This Chatbot Aims to Steer People Away From Child Abuse Material

There are huge volumes of child sexual abuse photos and videos online—millions of pieces are removed from the web every year. These illegal images are often found on social media websites, image hosting services, dark web forums, and legal pornography websites. Now a new tool on one of the biggest pornography websites is trying to interrupt people as they search for child sexual abuse material and redirect them to a service where they can get help.

Since March this year, each time someone has searched for a word or phrase that could be related to child sexual abuse material (also known as CSAM) on Pornhub’s UK website, a chatbot has appeared and interrupted their attempted search, asking them whether they want to get help with the behavior they’re showing. During the first 30 days of the system’s trial, users triggered the chatbot 173,904 times.

“The scale of the problem is so huge that we really need to try and prevent it happening in the first place,” says Susie Hargreaves, the chief executive of the Internet Watch Foundation (IWF), a UK-based nonprofit that removes child sexual abuse content from the web. The IWF is one of two organizations that developed the chatbot being used on Pornhub. “We want the results to be that people don’t look for child sexual abuse. They stop and check their own behavior,” Hargreaves says.

The chatbot appears when someone searches Pornhub for any of 28,000 terms identified as potentially linked to people looking for CSAM, and those searches can include millions of potential keyword combinations. The popup, which has been designed by anti-child abuse charity the Lucy Faithfull Foundation alongside the IWF, then asks people a series of questions and explains that what they are searching for may be illegal. The chatbot tells people it is run by the Lucy Faithfull Foundation and says it offers “confidential, nonjudgmental” support. People who click a prompt saying they would like help are offered details of the organization’s website, telephone help line, and email service.

“We realized this needs to be as simple a user journey as possible,” says Dan Sexton, the chief technology officer at the IWF. Sexton explains that the chatbot has been in development for more than 18 months and that multiple groups were involved in its design. The aim is to “divert” or “disrupt” someone who may be looking for child sexual abuse material and to do so using just a few clicks.

The key to the system’s success is the question at the heart of its premise: Does this kind of behavioral nudge stop people from looking for CSAM? The results can be difficult to measure, say those involved with the chatbot project. If someone closes their browser after seeing the chatbot, that could be considered a success, for example, but it is impossible to know what they did next.


A behavioral nudge isn’t unprecedented in efforts to reduce online harms, though, and there is some data by which the results can be measured. It is possible to see how long people interact with the chatbot and how many clicks they make to get help. Before Pornhub started trialing the chatbot, it was already pointing people toward the Lucy Faithfull Foundation’s website using a static page whenever they searched for any of the 28,000 terms.

Using the chatbot is more direct and maybe more engaging, says Donald Findlater, the director of the Stop It Now help line run by the Lucy Faithfull Foundation. After the chatbot appeared more than 170,000 times in March, 158 people clicked through to the help line’s website. While the number is “modest,” Findlater says, those people have made an important step. “They’ve overcome quite a lot of hurdles to do that,” Findlater says. “Anything that stops people just starting the journey is a measure of success,” the IWF’s Hargreaves adds. “We know that people are using it. We know they are making referrals, we know they’re accessing services.”

Pornhub has a checkered reputation for the moderation of videos on its website, and reports have detailed how women and girls had videos of themselves uploaded without their consent. In December 2020, Pornhub removed more than 10 million videos from its website and started requiring people uploading content to verify their identity. Last year, 9,000 pieces of CSAM were removed from Pornhub.

“The IWF chatbot is yet another layer of protection to ensure users are educated that they will not find such illegal material on our platform, and referring them to Stop It Now to help change their behavior,” a spokesperson for Pornhub says, adding it has “zero tolerance” for illegal material and has clear policies around CSAM. Those involved in the chatbot project say Pornhub volunteered to take part, isn’t being paid to do so, and that the system will run on Pornhub’s UK website for the next year before being evaluated by external academics.

John Perrino, a policy analyst at the Stanford Internet Observatory who is not connected to the project, says there has been a push in recent years to build new tools that use “safety by design” to combat harms online. “It’s an interesting collaboration, in a line of policy and public perception, to help users and point them toward healthy resources and healthy habits,” Perrino says. He adds that he has not seen a tool exactly like this being developed for a pornography website before.

There is already some evidence that this kind of technical intervention can make a difference in diverting people away from potential child sexual abuse material and reduce the number of searches for CSAM online. For instance, as far back as 2013, Google worked with the Lucy Faithfull Foundation to introduce warning messages when people search for terms that could be linked to CSAM. There was a “thirteen-fold reduction” in the number of searches for child sexual abuse material as a result of the warnings, Google said in 2018.


A separate study in 2015 found that search engines that put blocking measures in place against terms linked to child sexual abuse saw the number of searches drastically decrease, compared to those that didn’t. One set of advertisements designed to direct people looking for CSAM to help lines in Germany saw 240,000 website clicks and more than 20 million impressions over a three-year period. A 2021 study that looked at warning pop-up messages on gambling websites found the nudges had a “limited impact.”

Those involved with the chatbot stress that they don’t see it as the only way to stop people from finding child sexual abuse material online. “The solution is not a magic bullet that is going to stop the demand for child sexual abuse on the internet. It is deployed in a particular environment,” Sexton says. However, if the system is successful, he adds, it could then be rolled out to other websites or online services.

“There are other places that they will also be looking, whether it’s on various social media sites, whether it’s on various gaming platforms,” Findlater says. However, if that were to happen, the triggers that cause the chatbot to pop up would have to be evaluated and the system rebuilt for the specific website it is on. The search terms used by Pornhub, for instance, wouldn’t work on a Google search. “We can’t transfer one set of warnings to another context,” Findlater says.
