Another week, another privacy horror show: Crisis Text Line, a nonprofit text message service for people experiencing serious mental health crises, has been using "anonymized" conversation data to power a for-profit machine learning tool for customer support teams. (After backlash, CTL announced it would stop.) Crisis Text Line’s response to the backlash focused on the data itself and whether it included personally identifiable information. But that response uses data as a distraction. Imagine this: Say you texted Crisis Text Line and got back a message that said “Hey, just so you know, we'll use this conversation to help our for-profit subsidiary build a tool for companies who do customer support.” Would you keep texting?
That’s the real travesty—when the price of obtaining mental health help in a crisis is becoming grist for the profit mill. And it’s not just users of CTL who pay; it’s everyone who goes looking for help when they need it most.
Americans need help and can't get it. The huge unmet demand for critical advice and help has given rise to a new class of organizations and software tools that exist in a regulatory gray area. They help people with bankruptcy or evictions, but they aren't lawyers; they help people with mental health crises, but they aren't care providers. They invite ordinary people to rely on them and often do provide real help. But these services can also avoid taking responsibility for their advice, or even abuse the trust people have put in them. They can make mistakes, push predatory advertising and disinformation, or just outright sell data. And the consumer safeguards that would normally protect people from malfeasance or mistakes by lawyers or doctors haven't caught up.
This regulatory gray area can also constrain organizations that have novel solutions to offer. Take Upsolve, a nonprofit that develops software to guide people through bankruptcy. (The organization takes pains to claim it does not offer legal advice.) Upsolve wants to train New York community leaders to help others navigate the city's notorious debt courts. One problem: These would-be trainees aren't lawyers, so under New York (and nearly every other state) law, Upsolve's initiative would be illegal. Upsolve is now suing to carve out an exception for itself. The company claims, quite rightly, that a lack of legal help means people effectively lack rights under the law.
The legal profession's failure to grant Americans access to support is well-documented. But Upsolve's lawsuit also raises new, important questions. Who is ultimately responsible for the advice given under a program like this, and who is responsible for a mistake—a trainee, a trainer, both? How do we teach people about their rights as a client of this service, and how to seek recourse? These are eminently answerable questions. There are lots of policy tools for creating relationships with elevated responsibilities: We could assign advice-givers a special legal status, establish a duty of loyalty for organizations that handle sensitive data, or create policy sandboxes to test and learn from new models for delivering advice.
But instead of using these tools, most regulators seem content to bury their heads in the sand. Officially, you can’t give legal advice or health advice without a professional credential. Unofficially, people can get such advice in all but name from tools and organizations operating in the margins. And while credentials can be important, regulators are failing to engage with the ways software has fundamentally changed how we give advice and care for one another, and what that means for the responsibilities of advice-givers.
And we need that engagement more than ever. People who seek help from experts or caregivers are vulnerable. They may not be able to distinguish a good service from a bad one. They don't have time to parse terms of service dense with jargon, caveats, and disclaimers. And they have little to no negotiating power to set better terms, especially when they're reaching out mid-crisis. That's why the fiduciary duties that lawyers and doctors have are so necessary in the first place: not just to protect a person seeking help once, but to give people confidence that they can seek help from experts for the most critical, sensitive issues they face. In other words, a lawyer's duty to their client isn't just to protect that client from that particular lawyer; it's to protect society's trust in lawyers.
And that’s the true harm—when people won't contact a suicide hotline because they don't trust that the hotline has their sole interest at heart. That distrust can be contagious: Crisis Text Line's actions might not just stop people from using Crisis Text Line; they might stop people from using any similar service. What's worse than not being able to find help? Not being able to trust it.
To fix this, we need to rethink advice and care for a digital world and design consumer protections that preserve trust in such help.

First, policymakers need to move beyond current definitions of advice and caregiving that are based solely on professional status. Any organization that invites vulnerable people to rely on it to navigate a critical life issue needs to have fiduciary responsibilities: It needs to be loyal to its clients’ interests, and it has an elevated responsibility to ensure the help it provides is trustworthy. To put it simply: Mistakes should cost the advisor, not the client.

Second, we need to gap-fill the gray area. Most professional fiduciaries—lawyers, doctors—are regulated by a patchwork of state authorities. Where those are slow to move, a newly assertive Federal Trade Commission can help set best practices for these not-quite-advice-giving organizations.

Finally, more work needs to be done to translate the professional duties of loyalty and care to the world of software-mediated advice. For example, an advisor might have a duty to alert users about software errors that may have affected the advice they received, or to create interfaces that help users proactively identify possible mistakes.
Beyond policymaking and regulation, we simply need a better common language for understanding emerging advice and care relationships. People should understand what they're getting when they use legal help software or text a crisis hotline, without having to become experts themselves. It’s time to throw out the traditional tech playbook of flashy marketing copy, legal disclaimers, and slot machine-like user interfaces.
In a crisis, people deserve a helping hand. They shouldn't have to decide whether that hand is worth taking.
Updated 2/2/2022 10:30 ET: This article has been updated to reflect that Crisis Text Line's for-profit spinoff shared data with a company that—although it builds conversational AI tools for customer service—has not directly used Crisis Text Line's data in its machine learning systems.