
The Signal for Help I Created Went Viral. Now It Could Be Misused

In 2020, I helped create the Signal for Help, a hand signal that communicates to friends, family, and bystanders that “I need you to check in on me in a safe way.” Our team promoted the Signal for Help across social media, anticipating a pandemic-related rise in already high rates of gendered violence, and it went viral in November 2021 during a charged time of anxiety, stay-at-home directives, and the proliferation of video calling.

Cases of women and girls using the Signal for Help in dangerous situations have made the news. In one, a woman used it during a traffic stop to get help with her abusive husband; in another, a woman used it to alert staff at a gas station that she was being held against her will by a violent ex-boyfriend. As a result, well-meaning people have been trying to integrate the Signal for Help with digital technology. A company that makes AI camera tools reached out to ask about building recognition of the Signal for Help into its security system, and similar amateur attempts have been discussed on social media.

The appeal is clear: Automatic detection could be useful for a well-intentioned friend or coworker on the other side of a video call who might miss seeing someone using the Signal for Help. It’s admirable that people want to help those who may be in danger, but these new applications of technology misunderstand the purpose and use of the Signal for Help.

Such efforts are part of a growing trend of using AI to recognize distress: Experiments identifying distress in livestock like chickens, cattle, and pigs have yielded promising results, because AI seems to disentangle a cacophony of animal shrieks, clucks, and grunts better than the naked ear.

But humans are not chickens or cattle. Intention to abuse and control can transform luddites into experts. In dangerous relationships, there’s always the question of who’s in charge of the tech.

The Signal for Help is an intentionally ephemeral tool, designed to help people communicate without uttering a word, and without leaving a digital trace. I’m being hurt … I can’t say it out loud … will you be there for me while I figure it out? Impermanence is an important feature, given the way abusers tend to control and manipulate. They lurk and stalk and monitor devices. Women’s shelters routinely help survivors deal with hacked smartphones, unwanted location tracking and voice recording apps, hidden cameras, and the like. Message boards, social media, and even word-of-mouth can help abusers violate the people they claim to love. In the case of the Signal for Help, the same AI mechanism designed for safety could alert abusers that the person they’re hurting is trying to use it.

And there are other problems with AI tools that detect distress in humans, tools that include software to scan student emails and web searches for signs of self-harm and violence, and software to identify student confusion, boredom, and distraction in virtual classrooms. On top of ethical and privacy concerns, their deployment hinges on the belief that we can reliably perceive when someone is in trouble, and act on it in a way that will truly help them. These tools operate on a positivist belief that when a human is in distress, they express it outwardly in predictable ways. And when they express it, they desire a specific kind of intervention.

But research shows that we can’t wholeheartedly trust our assumption that human facial expressions align with emotions. Mismatches between body and emotion may be more pronounced in unhealthy relationships. People being abused speak of dissociation, of needing to “leave their bodies” to survive. Some describe the lengths they go to in order to hide their offense, injury, and pain, because they must placate abusers and the bystanders who back them up. They talk about how conscious they are of every inflection and twitch, of how they chew, blink, and breathe, and how they get punished when they merely exist in a way that irritates their abusers.


I’m not confident we could acquire the data needed to navigate these harrowing power imbalances. There are many examples of AI tools absorbing our biases, discrimination, and misinterpretations, and mirroring our lazy, skewed perspectives. Unhealthy relationships seem like an especially thorny data set.

And what’s the response when abuse-related distress is detected? Will an alert go to, say, emergency services, police, or a friend? What if that does more harm than good? The most dangerous period for a woman being abused is when she takes action to leave her abusive partner. Taking action can prompt the abuser to intensify the violence, which is then more likely to take a lethal turn. And those at high risk of gendered abuse, including racialized and Indigenous women, women with disabilities, and queer and trans people, face unique perils in their interactions with authorities.

When women and gender-diverse people can’t leave their abusers, when they don’t call the police or go to the hospital, they have their reasons. AI-triggered action is a blunt instrument at such sensitive junctures.

AI tools for abuse detection and response can end up taking options away, but putting an end to experimentation would be reactionary. The answer isn’t to stop looking for ways that AI can help us end abuse; we just need to be more conscious of its risks and pitfalls. Wise voices call for trauma-informed computing: recognizing that technology can create and worsen trauma, committing to an ongoing effort to avoid that harm, and grounding design in the realities of equity-seeking people. Rather than clamor for technology to detect abuse for us, we might focus its potential elsewhere. Perhaps AI could help us build our confidence and competence in proactively responding to signals of abuse in the lives of those close to us, whether or not they use the Signal for Help itself. Perhaps it could enable us to practice a nonjudgmental stance and offer help to survivors in a spirit of care, countering our cultural tendency to ignore and stigmatize.

Will you be there for me? Maybe AI could help us learn how to become excellent supporters who know how to answer in each unique situation.


