In late November, the San Francisco Board of Supervisors voted 8-3 to give the police the option to launch potentially lethal, remote-controlled robots in emergencies, creating an international outcry over law enforcement use of “killer robots.”
The San Francisco Police Department (SFPD), which was behind the proposal, said it would deploy robots equipped with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspects” only when lives are at stake.
Missing from the mounds of media coverage is any mention of how digitally secure the lethal robots would be, or whether an unpatched vulnerability or a malicious threat actor could interfere with a machine’s functioning, no matter how skilled the robot operator, with tragic consequences.
Experts caution that robots are frequently insecure and subject to exploitation and, for those reasons alone, should not be used with the intent to harm human beings.
SFPD’s weaponised robot proposal under review
The law enforcement agency argued that the robots would be used only in extreme circumstances, and that only a few high-ranking officers could authorise their use as deadly force. SFPD also stressed that the robots would not be autonomous and would be operated remotely by officers trained to do just that.
The proposal came about after the SFPD struck language from a policy proposal related to the city’s use of its military-style weapons. The excised language, proposed by Board of Supervisors Rules Committee Chair Aaron Peskin, said, “Robots shall not be used as a use of force against any person.”
The removal of this language cleared the path for the SFPD to retrofit any of the department’s 17 robots to engage in lethal force actions.
Following public furor over the prospect of “murder” robots, the Board of Supervisors reversed itself a week later, voting 8-3 to prohibit police from using remote-controlled robots with lethal force. The supervisors separately sent the original lethal-robot provision of the policy back to the Board’s Rules Committee for further review, which means it could be brought back for future approval.
Robots inching toward lethal force
Military and law enforcement agencies have used robots for decades, starting as mechanical devices used for explosive ordnance disposal (EOD) or, more simply, bomb disposal.
In 2016, after a sniper killed five police officers in Dallas during a rally for Alton Sterling and Philando Castile, the Dallas Police Department deployed a small robot designed to investigate and safely discharge explosives. The department used what was likely a 10-year-old robot to kill the sniper, Micah Xavier Johnson, while keeping investigators safe, in the first known instance of an explosive-equipped robot disabling a suspect.
More recently, police departments have expanded applications for robotic technology, including Boston Dynamics' dog-like robot called Spot.
The Massachusetts State Police used Spot temporarily as a “mobile remote observation device” to provide troopers with images of suspicious devices or potentially hazardous locations that could be harbouring criminal suspects.
In October 2022, the Oakland Police Department (OPD) raised the concept of lethal robots to another level by proposing to equip its stable of robots with a gun-shaped “percussion actuated nonelectric disruptor,” or PAN disruptor, which directs an explosive force, typically a blank shotgun shell or pressurised water, at suspected bombs while human operators remain at a safe distance.
The OPD ultimately agreed on language that would prohibit any offensive use of robots against people, except for delivering pepper spray.
Given the creeping weaponisation of robots, a group of six leading robotics companies, led by Boston Dynamics, issued an open letter in early October advocating that general-purpose robots should not be weaponised.
“We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work raises new risks of harm and serious ethical issues,” the letter stated.
“Weaponised applications of these newly capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society. For these reasons, we do not support the weaponisation of our advanced-mobility general-purpose robots.”
Robots have a track record of insecurity
Given the growing prevalence of robots in military, industrial, and healthcare settings, much research has been conducted on their security. Academic researchers in Jordan developed a purpose-built attack tool and used it to breach the security of a robot platform called PeopleBot, launch DDoS attacks against it, and steal sensitive data.
Researchers at IOActive attempted to hack some of the more popular home, business, and industrial robots available on the market. They found critical cyber security issues in several robots from multiple vendors, leading them to conclude that current robot technology is insecure and susceptible to attacks.
Researchers at Trend Micro looked at the extent to which robots can be compromised. They found the machines they studied running outdated software, vulnerable OSes and libraries, weak authentication systems, and default credentials that were never changed. They also found tens of thousands of industrial devices exposed on public IP addresses, increasing the risk that attackers can access and hack them.
Víctor Mayoral-Vilches, founder of robotics security company Alias Robotics, wrote The Robot Hacking Manual because, “Robots are often shipped insecure and, in some cases, fully unprotected.” He contends that defensive security mechanisms for robots are still in the early stages and that robot vendors do not generally take responsibility in a timely manner, extending the zero-day exposure window to several years on average.
"Robots can be compromised either physically or remotely by a malicious actor in a matter of seconds,” Mayoral-Vilches tells CSO. “If weaponised, losing control of these systems means empowering malicious actors with remote-controlled, potentially lethal robots. We need to send a clear message to citizens of San Francisco that these robots are not secure and thereby aren't safe.”
Earlier this year, researchers at healthcare IoT security company Cynerio reported they found a set of five critical zero-day vulnerabilities, which they call JekyllBot:5, in hospital robots that enabled remote attackers to control the robots and their online consoles.
“Robots are incredibly useful tools that can be used for many, many different purposes,” Asher Brass, head of cyber network analysis at Cynerio, tells CSO. But, he adds, robots are a double-edged sword. “If you're talking about a lethal situation or anything like that, there are huge drawbacks from a cybersecurity perspective that are quite frightening.”
“There's a real disconnect between leadership in any position, whether it be political, hospital, etc., in understanding the functionality that they're voting to approve or adopting, versus understanding the real risk there,” Chad Holmes, cybersecurity evangelist at Cynerio, tells CSO.
Steps to improve robotic security
When asked about the specific robots SFPD listed in its military-use inventory (machines made by robotics companies REMOTEC, QinetiQ, iRobot, and Recon Robotics), Mayoral-Vilches says many of these systems are based on the legacy Joint Architecture for Unmanned Systems (JAUS) international standard.
“We've encountered implementations of JAUS which aren't up to date in terms of security threats. There's just not enough dialogue about cyber-insecurity among JAUS providers.”
According to Mayoral-Vilches, a better option for more secure robots would be the “more modern” Robot Operating System 2 (ROS 2), which is “an alternative robotic operating system that's increasingly showing more and more concern about cyber-insecurity.”
It is not just the manufacture of the devices themselves that is a concern; it is also how they are deployed in the field and who operates them. “It's not just the devices, the robots themselves, how they were developed, how they're secured, it's also how they're being deployed and being used,” Holmes says.
“When it comes to putting them in the field with a bunch of police officers, if they aren't deployed correctly, no matter how secure they are, they could still be susceptible to attack, takeover, etc. So, it's not just about manufacturing; it’s also about who's using them.”
Mayoral-Vilches thinks the following four essential steps could go a long way to improving the security of robots in the field:
- Authorities managing these systems (or external consultants) should maintain proper, up-to-date threat models and should periodically assess the threat landscape for new risks derived from security research (new flaws).
- Independent robotics and security experts should periodically conduct thorough security tests on each one of these systems (jointly and independently).
- Each system should include a tamper-resistant, black-box-like subsystem that forensically records all events, which should be analysed after each mission.
- Each system should include a remote (robot) kill switch that can halt operation when necessary.
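The last two recommendations, a forensic “black box” and a kill switch, can be illustrated in a few lines of code. The sketch below is purely hypothetical and is not drawn from any of the robot platforms named in this article; the class and method names are invented for illustration. The event log here uses hash chaining, so that altering any recorded event after the fact breaks the chain and is detectable in a post-mission audit.

```python
import hashlib
import json
import time


class BlackBoxRecorder:
    """Append-only event log with hash chaining (tamper-evident, not tamper-proof)."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self._events = []            # list of (entry, digest) pairs
        self._last_hash = self.GENESIS

    def record(self, event: dict) -> str:
        # Each entry embeds the hash of the previous entry, forming a chain.
        entry = {"ts": time.time(), "event": event, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._events.append((entry, digest))
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        # Recompute every hash; any edited entry breaks the chain.
        prev = self.GENESIS
        for entry, digest in self._events:
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or recomputed != digest:
                return False
            prev = digest
        return True


class RobotSupervisor:
    """Wraps every command with a kill-switch check and black-box logging."""

    def __init__(self, recorder: BlackBoxRecorder):
        self.recorder = recorder
        self.killed = False

    def kill(self):
        # Remote kill switch: once set, no further commands execute.
        self.killed = True
        self.recorder.record({"type": "kill_switch"})

    def execute(self, command: str) -> bool:
        if self.killed:
            self.recorder.record({"type": "refused", "command": command})
            return False
        self.recorder.record({"type": "executed", "command": command})
        return True
```

A post-mission audit would replay the log and call `verify()`; a real deployment would also need the log stored on tamper-resistant hardware and the kill-switch channel authenticated, details this sketch deliberately omits.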
For now, however, Mayoral-Vilches believes that police force use of lethal robots would be “a terrible mistake.” It is “ethically and technically a bad decision. Robotics is far from mature, especially from a cybersecurity perspective.”
Not everyone agrees that law enforcement’s use of robots equipped to kill is a bad idea.
“If you just said you had a tool that would allow police to safely stop a sniper from killing more people without endangering a bunch of your policemen, and that the decision on whether or not to explode the device would be made by people… I would be in favour of it,” Jeff Burnstein, president of the Association for Advancing Automation, tells CSO, adding that his association has not taken a position on the issue.
“I would not support that same situation if the machine were making the decision. To me, that's a difference.”