
Meta’s Biggest Encrypted Messaging Mistake Was Its Promise

Since the 1990s, governments around the world have often used the welfare of children as an excuse for all kinds of internet policy overreach: encryption backdoors, centralized censorship mechanisms, and anti-anonymity measures. So when Meta, facing pressure from the government as well as NGOs, announced its decision last week to delay the rollout of end-to-end encryption for messaging systems such as Instagram DMs and Messenger—with child safety as the cited reason—privacy advocates were understandably upset and suspicious. But speaking as someone who previously worked on safety and security at Facebook, I don’t view the delay as an arbitrary political decision. The concern over the safety of young users is genuine, and the problems are pervasive, especially when it comes to social systems as complex as those at Meta.

Frustrating as it may be, the company’s delay is likely justified. Some form of end-to-end encryption should be available to all people, to preserve the right to private communication and prevent government incursions. But end-to-end encryption isn't just one issue or technology—it’s a broad set of policy decisions and use cases with high-stakes consequences. As such, creating the proper environment for its use is a complex task. The need for end-to-end encryption, as well as the conditions required to implement it safely, varies from platform to platform, and apps like Facebook and Instagram still require serious changes before encryption can be introduced without compromising functionality or creating safety risks. Meta’s greatest misstep isn’t this latest delay but rather the timeline, and perhaps even the outcome, it promised.

When the company, then still called Facebook, first announced its timeline in 2019 to implement interoperable end-to-end encryption across all its properties, the plan's infeasibility was immediately clear. The proposed timeline was so aggressive that merely building the technology would have been nigh impossible, with safety mechanisms barely entering the picture. WhatsApp already had end-to-end encryption and content-oblivious mechanisms for detecting some kinds of harm, and it was assumed these would readily translate to other Facebook properties.

However, apps and sites like Facebook and Instagram are wildly different in architecture and dynamics from WhatsApp. Both implement direct messaging alongside systems that actively try to connect you with people, drawing on users' uploaded phone books, algorithmic matching of similar accounts based on location, interests, and friends, and general online activity. In the case of Facebook, large public or private groups also expand one's social graph, as do global search of all accounts and grouping by institutions such as schools. While apps like WhatsApp and Signal operate more like private direct messaging between known contacts, Facebook and Instagram’s growth-oriented design leads to situations where abusers can more easily find new victims, identities and relationships are accidentally exposed, and large numbers of strangers are mixed together.

These fundamental differences mean that before Meta can safely switch all of its platforms to end-to-end encryption, its apps must undergo some nontrivial changes. First off, the company must improve its existing content-oblivious harm-reduction mechanisms. This involves using social graphs to detect users who are trying to rapidly expand their networks or to target people of certain demographics (for example, people of a particular declared or inferred age), and finding other potentially problematic patterns in metadata. These mechanisms can work hand in hand with user reporting options and proactive messaging, such that users are presented with safety messaging that informs them of their options for reporting abuse, along with efficient reporting flows to allow them to escalate to the operator of the platform. While these types of features are beneficial with or without end-to-end encryption, they become significantly more important when the ability to inspect content is removed.
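To make the idea concrete, here is a minimal sketch of what content-oblivious detection could look like. It operates only on metadata (who contacted whom, and a declared or inferred age bracket) and never inspects message content; the field names and thresholds are illustrative assumptions, not Meta's actual systems.

```python
from dataclasses import dataclass

# Hypothetical metadata record: one first-contact event between accounts.
# Field names are illustrative, not drawn from any real platform's schema.
@dataclass
class ContactEvent:
    sender: str
    recipient: str
    recipient_age_bracket: str  # e.g. "13-17", "18-24"

def flag_rapid_expanders(events, min_new_contacts=20, minor_ratio=0.5):
    """Flag senders who message many strangers, skewed toward minors.

    Works purely on metadata -- no message content is inspected.
    The thresholds are placeholders a real system would have to tune.
    """
    by_sender = {}
    for e in events:
        by_sender.setdefault(e.sender, []).append(e)

    flagged = []
    for sender, evs in by_sender.items():
        recipients = {e.recipient for e in evs}
        if len(recipients) < min_new_contacts:
            continue  # not expanding unusually fast
        minors = sum(1 for e in evs if e.recipient_age_bracket == "13-17")
        if minors / len(evs) >= minor_ratio:
            flagged.append(sender)  # escalate for review / safety prompts
    return flagged
```

In practice, a flag like this would feed the safety messaging and reporting flows described above rather than trigger automatic enforcement.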

Meta must also limit recommendation engines and discoverability. “People You May Know,” or PYMK, has for years been understood as problematic within Facebook for its propensity to make inappropriate suggestions: recommending a therapist’s clients to each other, or potential targets to a possible abuser. What’s more, users no longer have the ability to prevent their account from surfacing via site search. One way to prevent such features from leading to unwanted and unsafe social connections would be to make it so that users with no social connection or those surfaced via search or PYMK can start chats without end-to-end encryption, and then have the option to mutually opt in. Restoring the ability to limit your exposure via search would also provide privacy benefits in and of itself.
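The mutual opt-in idea above can be sketched as a small state machine: chats between users with no prior social connection start unencrypted, and the channel upgrades to end-to-end encryption only once both parties agree. This is a hypothetical illustration of the flow, not any platform's actual implementation.

```python
class Chat:
    """Two-party chat that upgrades to E2EE by mutual consent."""

    def __init__(self, a, b, socially_connected):
        self.participants = (a, b)
        # Known contacts could start encrypted by default; strangers
        # (e.g. surfaced via search or PYMK) start unencrypted.
        self.encrypted = socially_connected
        self._opt_ins = set()

    def opt_in(self, user):
        if user not in self.participants:
            raise ValueError("not a participant")
        self._opt_ins.add(user)
        if self._opt_ins == set(self.participants):
            self.encrypted = True  # both agreed: upgrade the channel
```

Starting strangers off unencrypted keeps the platform's existing abuse-detection tooling in play exactly where the risk of unwanted contact is highest.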


While introducing the content-oblivious detection mechanisms mentioned earlier would help spot certain kinds of harm, detecting child sexual abuse material (CSAM) and nonconsensual intimate imagery (NCII) is fairly difficult without a system that has enough access to user activity to examine images and determine whether they include harmful or illegal content. There are proposals to implement these protections on the client side, so a user's device rather than a chat service is responsible for detecting the content. How to implement this without introducing new flaws or channels for government abuse is the subject of active debate, and it may ultimately require law enforcement to engage in more traditional undercover work instead of relying on companies to feed them tips. But for some channels, “end-to-end encryption lite” may be preferable to the status quo of unencrypted direct messaging.
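At its simplest, client-side detection means the device checks outgoing media against a list of hashes of known illegal content before the message is encrypted and sent. The sketch below uses an exact cryptographic hash purely to stay self-contained; real systems such as PhotoDNA use perceptual hashes that tolerate resizing and re-encoding, and the hash list here is a stand-in.

```python
import hashlib

# Stand-in for a vetted list of hashes of known illegal content,
# which a real deployment would distribute and update securely.
KNOWN_HASHES = {hashlib.sha256(b"known-bad-bytes").hexdigest()}

def screen_before_send(media_bytes: bytes) -> bool:
    """Return True if the media is clear to send, False on a match.

    Runs on the sender's device, before encryption -- the service
    never sees the plaintext, only (at most) a report of a match.
    """
    digest = hashlib.sha256(media_bytes).hexdigest()
    return digest not in KNOWN_HASHES
```

The hard questions debated above (who controls the hash list, what happens on a match, how to prevent the mechanism being repurposed for censorship) live entirely outside this function, which is why the design remains contested.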

In addition to the need for content policy and safety changes, the technical challenges of adding interoperable end-to-end encryption across multiple apps and websites would likely require changes in web browser design, making Meta’s timeline unlikely to have ever been met in the first place. Rather than a measure to improve user privacy or safety, this plan was most likely driven by the threat of regulatory action, as a way to tie disparate services together and make them difficult to separate back into distinct companies. (Meta acquired Instagram in 2012 and WhatsApp in 2014.) Regardless, while the company has made some progress with safety mechanisms since its initial announcement, significant hurdles still remain.

End-to-end encrypted communication can prevent significant harms and abuses by both governments and corporations, and everyone should have access to such tools in some form. But apart from safety concerns, it’s worth considering that the web is a fairly poor place to implement end-to-end encrypted communication at all. It's extremely difficult to ensure the integrity of dynamic web pages, or that a user is using the same code as the other parties in their conversations. Matters of managing encryption keys, availability of the requisite cryptographic tools, and syncing messages between browsers and devices are also not easily addressable with modern browser technology, and would require new web standards to implement.

Splitting Messenger into its own, completely separate app untethered from Facebook, and phasing out chat within Facebook.com and Instagram.com themselves, would greatly simplify these problems. Indeed, it may be best to let apps such as WhatsApp and Signal be the default communication channel, rather than every website attempting to implement its own chat function. In a world where web platforms have complex social dynamics and implement many forms of person-to-person interactions, end-to-end encryption is not a one-size-fits-all approach, and it doesn’t always even make sense as conversations scale. Not every form of messaging can or should be individually encrypted to every member of its potential audience.

Though the calls for end-to-end encryption within all messaging tools are widespread and understandable, it’s important that platforms consider the significant trust and safety challenges that will inevitably arise—not just in terms of privacy and compliance with government requests for user data, but in terms of harassment, abuse, and exploitation. These considerations must be built into the product design from the start, and must be balanced with the risks of unencrypted communications. Meta would have been wise to acknowledge that sooner, rather than attempting to rush the job for the wrong reasons.

WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here, and see our submission guidelines here. Submit an op-ed at opinion@wired.com.

