Three years ago, the San Francisco Board of Supervisors made history by becoming the first city in the nation to ban use of facial recognition technology by local government. Last night, the board went in a different direction, giving police the right to kill a criminal suspect with a teleoperated robot if they believe there is an imminent threat of death to police or members of the public.
Assistant police chief David Lazar said ahead of the vote that killer robots might be needed in scenarios involving mass shootings or suicide bombers, citing the Mandalay Bay shooting in Las Vegas in 2017 and the killing of five police officers in Dallas, Texas, in 2016. Dallas police ultimately used explosives strapped to a Remotec F5A bomb disposal robot—a model also possessed by the San Francisco Police Department—to kill that suspect.
The new administrative code requires a police chief to authorize use of deadly force involving a robot and to first consider de-escalation or an alternative use of force. But some civil liberties groups, San Francisco residents, and experts on police violence fear allowing killer robots on city streets. They say the policy change normalizes militarized policing and could lead to the intimidation or death of vulnerable people historically discriminated against by law enforcement, such as those with mental health problems, homeless people, and communities of color.
The SFPD is still in the process of carrying out recommendations—including more than 100 related to use of force and bias—made by the US Department of Justice in 2016 after a series of shootings by police officers. This year, the California Racial and Identity Profiling Advisory Board, which aims to end racial disparities in policing, reported that SFPD searched Black people at a rate five times greater than that for white residents and were 13 times as likely to use force on Black residents compared to white residents.
Supervisor Hillary Ronen voted against killer robots at the meeting, saying that, like many US parents, she sometimes worries about school shootings, but that the new policy opens a Pandora’s box in which police use of robots becomes the norm. “The tool begs to be used,” she says. “It might be used originally only occasionally, but over time people get less sensitive.”
Peter Asaro, an associate professor at The New School in New York who researches automation of police force, agrees. “Giving police this option means they’re going to use it when they should be looking for other options,” he said.
Asaro also believes that authorizing lethal police robots could be self-defeating. Suspects might be more suspicious of attempts to negotiate via a robot if they know it could be armed, he says. Asaro is cofounder of the International Committee for Robot Arms Control, a group working on an international treaty to ban killer robots.
San Francisco lawmakers were asked to sign off on police use of killer robots because of a process set in motion by a 2021 California law called Assembly Bill 481 that requires local oversight of the funding, acquisition, and use of military equipment by police. The law is intended to give local governments the power to guard against the militarization of law enforcement agencies and explicitly says that equipment is used more frequently in Black and Brown communities. In nearby Oakland, AB 481 led the city’s police department to request lethal use of force involving teleoperated robots, but in October police withdrew that request.
One effect of AB 481 is to add local oversight to hardware like the kind obtained through a US Department of Defense program that sends billions of dollars of military equipment such as armored vehicles and ammunition to local police departments. Equipment from the program was used against protesters in the wake of the police killings of Michael Brown in Ferguson, Missouri, in 2014 and George Floyd in Minneapolis in 2020.
Earlier this year, San Francisco supervisor Aaron Peskin amended San Francisco’s draft policy for military-grade police equipment to explicitly forbid use of robots to deploy force against any person. But an amendment proposed by SFPD this month argued that police needed to be free to use robotic force, because its officers must be ready to respond to incidents in which multiple people were killed. “In some cases, deadly force against a threat is the only option to mitigate those mass casualties,” the amendment said.
Ahead of yesterday’s vote, Brian Cox, director of the Integrity Unit at the San Francisco Public Defender’s Office, called the change antithetical to the progressive values the city has long stood for and urged supervisors to reject SFPD’s proposal. “This is a false choice, predicated on fearmongering and a desire to write their own rules,” he said in a letter to the board of supervisors.
Cox said lethal robots on SF streets could cause great harm, worsened by “SFPD’s long history of using excessive force—particularly against people of color.” The American Civil Liberties Union, the Electronic Frontier Foundation, and the Lawyers Committee for Civil Rights have also voiced opposition to the policy.
The San Francisco Police Department has disclosed that it has 17 robots, though only 12 are operational. They include search-and-rescue robots designed for use after a natural disaster like an earthquake, but also models that can be equipped with a shotgun, explosives, or pepper spray emitter.
Supervisor Aaron Peskin referred to the potential for police use of explosives to go wrong during the debate ahead of yesterday’s vote. During a 1985 standoff in Philadelphia, police dropped explosives from a helicopter on a house, causing a fire that killed 11 people and destroyed 61 homes.
Peskin called that one of the most atrocious and illegal incidents in the history of US law enforcement but said that the fact that nothing similar has ever occurred in San Francisco gave him a measure of comfort. He ultimately voted to allow SFPD to use deadly robots. But he added the restriction that only the chief of police, assistant chief of operations, or deputy chief of special operations could authorize use of deadly force with a robot, along with language urging that de-escalation be considered first.
Granting approval to killer robots is the latest twist in a series of laws on policing technology from the tech hub that is San Francisco. After passing a law rejecting police use of Tasers in 2018, and providing oversight of surveillance technology and barring use of face recognition in 2019, city leaders in September gave police access to private security camera footage.
Supervisor Dean Preston referred to San Francisco’s inconsistent record on police technology in his dissent yesterday. “If police shouldn’t be trusted with Tasers, they sure as hell shouldn’t be trusted with killer robots,” he said. “We have a police force, not an army.”
San Francisco’s new policy comes at a time when police access to robots is expanding and those robots are becoming more capable. Most existing police robots move slowly on caterpillar tracks, but police forces in New York and Germany are beginning to use legged robots like the nimble quadruped Spot Mini.
Axon, manufacturer of the Taser, has proposed adding the weapon to drones to stop mass shootings. And in China, researchers are working on quadrupeds that work in tandem with tiny drones to chase down suspects.
Boston Dynamics, a pioneer of legged robots, and five other robotics manufacturers published an open letter in October objecting to the weaponization of their robots. Signatories said they felt a renewed sense of urgency to state their position due to “a small number of people who have visibly publicized their makeshift efforts to weaponize commercially available robots.” But as robotics becomes more advanced and cheaper, there are plenty of competitors without such reservations. Ghost Robotics, a Pennsylvania company in pilot projects with the US military and Department of Homeland Security on the US-Mexico border, allows customers to mount guns on its legged robots.