In July, Meta touted its efforts to clamp down on hate speech on Facebook ahead of Kenya’s August 9 election. It spoke too soon. The company continued to permit ads encouraging ethnic violence in the country, according to a new report—and now Meta’s platforms face a possible suspension.
In the report, researchers from the activist group Global Witness and the British law firm Foxglove Legal attempted to buy ads that included hate speech and calls for violence, including genocide, in both Swahili and English. Meta’s ad systems eventually approved all of them.
“It is very clear that Facebook is in violation of the laws of our country,” Danvas Makori, the commissioner of Kenya’s National Cohesion and Integration Commission (NCIC), said in a press conference following the publication of the Global Witness report. “They have allowed themselves to be a vector of hate speech and incitement, misinformation, and disinformation.” The NCIC said Meta would have a week to comply with the country’s hate speech regulations or be suspended. (The NCIC and the Communications Authority did not respond to requests for comment by the time of publication.)
But shutting down the platform, or even the mere threat of doing so, could have long-term consequences, says Odanga Madung, a Kenyan journalist and Mozilla fellow who has researched disinformation and hate speech on social platforms. “We have been saying for years that if the platforms do not clean up their act, their models of doing business won’t be sustainable,” says Madung. Leaving up hate speech and other content that may violate local laws provides governments an easy justification to ban social platforms altogether. “In authoritarian governments, or governments with authoritarian streaks, they are looking for convenient reasons to get rid of platforms.”
Kenya formed the NCIC in 2008 to ensure peaceful elections, after the results of the country’s 2007 presidential elections led to widespread violence and the displacement of some 600,000 people. Earlier this year, the commission warned that hate speech on social platforms had increased 20 percent in 2022, citing the “misuse of social media platforms to perpetuate ethnic hate speech and incitement to violence.” Experts have warned that this year’s elections are also at risk of becoming violent.
Meta’s most recent statement outlining its approach to the Kenyan elections said it had removed 37,000 pieces of hate-speech content and 42,000 pieces of content that violated the company’s “violence and incitement policy” over six months leading up to April 30.
But Cori Crider, cofounder of Foxglove Legal, says that this transparency does not provide enough context to truly evaluate how effective Meta’s operations to curb hate speech in Kenya have been.
“You don't know what the denominator is, right? You have no idea how much content out there is missing, or how long any of those pieces of content stayed up,” she says. “Because if a piece of hateful content is up there long enough to go viral, as happened in the Ethiopian context, the damage is, to a very significant extent, done.”
In June, Global Witness and Foxglove found that Meta continued to approve ads in Amharic targeting Ethiopian users that included hate speech and calls for violence. Facebook has been implicated in spreading hate speech and stoking ethnic violence in Ethiopia’s ongoing conflict.
Crider argues that Facebook needs to invest more in its moderation practices and protections for democracy. She worries that even the threat of a ban allows the company to deflect accountability for the problems it has left unaddressed.
“I think ultimately the moment that any regulator looks at Facebook and looks as if they're going to make them actually do something that might cost them some money, they start howling about censorship and present a false choice that it's either an essentially unmoderated and unregulated Facebook or no Facebook at all,” she says.
And Crider says there are things the company can do, including “break the glass” measures like deprioritizing its heavily promoted live videos, limiting the reach of inflammatory content, and banning election-related ads in the run-up to the vote.
Mercy Ndegwa, Meta’s director of public policy for East Africa and the Horn of Africa, told WIRED that the company has “taken extensive steps to help us catch hate speech and inflammatory content in Kenya, and we’re intensifying these efforts ahead of the election.” She acknowledged, however, that “despite these efforts, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes.” Meta did not answer specific questions about the number of content moderators it has who speak Swahili or other Kenyan languages, or about the nature of its conversations with the Kenyan government.
“What the researchers did was stress-test Facebook’s systems and proved that what the company was saying was hogwash,” says Madung. That Meta approved the ads despite a dedicated review process “raises questions about their ability to handle other forms of hate speech,” he says, including the vast amount of user-generated content that does not require preapproval.
But banning Meta’s platforms, says Madung, will not get rid of disinformation or ethnic tensions, because it does not address the root cause. “This is not a mutually exclusive question,” he says. “We need to find a middle ground between heavy-handed approaches and real platform accountability.”
On Saturday, Joseph Mucheru, cabinet secretary for internet and communications technologies (ICT), tweeted, “Media, including social media, will continue to enjoy PRESS FREEDOM in Kenya. Not clear what legal framework NCIC plans to use to suspend Facebook. Govt is on record. We are NOT shutting down the Internet.” There’s currently no legal framework that would allow NCIC to order Facebook’s suspension, concurs Bridget Andere, Africa policy analyst at digital-rights nonprofit Access Now.
“Platforms like Meta have failed completely in their handling of misinformation, disinformation, and hate speech in Tigray and Myanmar,” said Andere. “The danger is that governments will use that as an excuse for internet shutdowns and app blocking, when it should instead spur companies toward greater investment in human content moderation, and doing so in an ethical and human-rights-respecting manner.”
Madung, likewise, worries that regardless of whether the government chooses to suspend Facebook and Instagram now, the damage may already be done. “The effects will be seen at a different time,” he says. “The issue is, the precedent is now officially out there, and it could be referred to at any point in time.”