Killer robots are fiercely debated for a technology that doesn't actually exist yet.
The term broadly refers to any theoretical technology that can deliberately deploy lethal force against human targets without explicit human authorisation.
While a drone might identify a potential target, it will always wait for human commands — for its controller to "pull the trigger." But Lethal Autonomous Weapons (LAWs), as "killer robots" are more technically known, may be programmed to engage anyone they identify as a lawful target within a designated battlefield, without anyone directly controlling them and without seeking human confirmation before a kill.
It's the subject of significant ongoing research and development, and unsurprisingly, it has proved wildly controversial. NGOs and pressure groups are lobbying for LAWs to be preemptively banned before they can be created because of the risks they allegedly pose. "Allowing life or death decisions to be made by machines crosses a fundamental moral line," argues the Campaign to Stop Killer Robots.
But there are also strong arguments in favour of developing LAWs, from a potential reduction in human casualties to increased accountability — as well as necessity in the face of rapidly evolving threats, everywhere from the physical battlefield to cyberspace.
William Boothby, a former lawyer and Air Commodore in the RAF, has contributed to pioneering research on the subject of LAWs, and holds a doctorate in international law. Business Insider spoke to him to get his perspective on why "killer robots," in some circumstances, aren't actually such a bad idea.
“You don’t get emotion. You don’t get anger. You don’t get revenge. You don’t get panic and confusion. You don’t get perversity,” Boothby says.
And that’s just the start.
This interview has been edited for length and clarity.
Autonomous weapons could save civilian lives — and we’re closer to them than you might expect.
Rob Price: Do you support the development of lethal autonomous weapons — and in what circumstances?
Dr. William Boothby: Well, I wouldn’t put it in those blunt terms. I support the research and the development of the technology, with a view to achieving autonomous systems which are able to operate at least on a more reliable basis than human beings.
I recognise that there are in existence certain technologies already, such as Iron Dome [an automated Israeli missile defence system] and Phalanx [a naval defensive weapons system], where what you have essentially is a system that works autonomously when certain events occur.
But there is a distinction between “point defence” and what you could call an offensive system — the latter being a system which goes out and seeks its own target, as opposed to one like Iron Dome that is there to wait until rockets are inbound and then take them out.
The distinction is based on the notion that if you’re engaged in point defence, and if you have programmed the system appropriately so that it only reacts to what would be legitimate threats (i.e., rockets but not airliners), then there ought not to be a problem.
However, the minute that we’re talking about something going out on the offensive to seek objects to attack, then we are talking about something that is rather more problematic — because all of those complications within targeting law come into play in a way that they don’t necessarily when you’re dealing with point defence.
Price: So what are the most compelling arguments for using autonomous weaponry in an offensive capacity?
Boothby: I think that if you’re looking into the future, the only way you can interpret arguments for and against is by looking at the potential nature of the future battlespace.
I am clear in my own mind that autonomy in the future will gradually emerge in all environments — in the land environment, in the air environment, in the maritime environment both on and below the surface, and in cyberspace and outer space.
Increasingly, you are going to see human beings as the weakest link in the operation of both offensive and defensive systems, and the problem is that potentially you’re going to be in a situation where speed is going to be the challenge — rendering autonomy essential.
Or, you are talking about a threat of such mass that the human being is going to be the weakest link, because they just can’t compute the scale, scope, and extent of the inbound threat.
Secondly, any discussion about autonomy in isolation is nonsense.
One has to talk about autonomy in terms of what it is being developed in order to counter, and if you have a situation in which, for instance, the threat is never going to be prohibited, what on earth is the justification for prohibiting the only possible way of responding?
This is all in very vague and theoretical terms, so here is an example:
Imagine a soldier has been given the job of clearing a row of houses with his patrol.
They haven’t a clue whether there are terrorists in those houses, or peaceful families. They’re going down a brightly sunlit street, going from one house to the next, and as this soldier goes into one particular building, he’s terrified. He goes from the light into the darkness. And in the darkness he detects movement. And in terror he empties his gun inside that particular room and kills all the occupants.
And it’s only afterwards that it’s worked out that the movement was that of a baby.
Yet, imagine the possibility of designing the type of technology where the machine would be capable of going inside the building and would have sensors that are able to distinguish between the movement of a large metallic object like a weapon and something lacking that metallic content — and would potentially be in a position to save those lives.
So, what is it that machines have that human beings don’t? Clearly, you don’t get emotion. You don’t get anger. You don’t get revenge. You don’t get panic and confusion. You don’t get perversity, in the sense that machinery won’t go rogue.
However, because the machinery has been made by human beings you do get fallibility.
There is currently no international law that specifically applies to autonomous weapons.
Price: What’s the current legal status of autonomous weapons?
Boothby: The international law that applies to autonomous weapon technologies is exactly the same international law that applies to any other weapon technology.
There are basic principles that apply to all states, and specific rules about particular technologies.
There is a prohibition on the use of any weapon system that is of a nature to cause unnecessary injury or suffering for which there is no corresponding military purpose. An example would be adding an irritant to a bullet so that, in addition to inflicting the kinetic injury, it would also cause an irritant suffering effect for which there is no corresponding military purpose. That’s rule number one. It applies to all states and all weapons.
Rule number two is that it is prohibited to use weapons that are indiscriminate by nature, i.e., which you can’t direct at a particular target, or whose effects you cannot reasonably limit to the chosen target.
Thirdly, it is prohibited to use weapons which have prohibited damaging effects on the natural environment.
There are no specific rules dealing with autonomy. But the autonomous weapon system may use a particular injuring or damaging technology which itself may be the subject of a specific provision.
For example, an autonomous mine system, if it’s an anti-personnel mine, would be prohibited in states that are party to the Ottawa convention. If it’s not an anti-personnel mine, there are lots of other treaties with technical provisions dealing with vehicle mines.
So, if you were wanting to talk about the autonomous nature of the thing specifically, then there is no ad-hoc legal provision dealing with autonomy.
It doesn’t stop there. The issue is this: In the hands of its user, a weapon is that user’s tool that they use as an instrument to cause damage.
Once you’re discussing a weapon that is autonomous, you are talking about something where it isn’t the individual who is deciding what specifically is to be targeted but the weapon itself. Therefore, that brings in the law that relates to targeting.
The question then becomes whether the autonomous weapon system is capable of being used in accordance with targeting law rules that would normally be implemented by a human being.
There are some elements of the targeting law rules that autonomous weapon technology will be capable of addressing because, for example, the weapon system can be designed specifically to recognise an object that constitutes a military objective i.e. a lawful target.
Targeting law also requires an attacker to consider whether a planned attack would be indiscriminate.
When you are thinking about that sort of evaluative decision making, at the moment, autonomous technology would not be capable of doing that. There may, however, be circumstances where an autonomous weapon system can be used legitimately at the moment.
For example, imagine that you were undertaking military operations in areas of desert, or areas of remote open ocean. You may know because of patterns of life and surveillance that you’ve done, what you would expect the sensors to see — and you could simply program the weapon system not to attack if the sensors see anything other than that which is expected.
But the minute that you move down the scale to more congested, urban targeting environments, the more difficult it will be to justify the use of current autonomous technologies.
Killer robots get to the heart of the question: “What is the nature of warfare?”
Price: Do you think autonomous weaponry could make warfare safer and more accountable?
Boothby: I think that there is that possibility — if technology develops appropriately in that direction and if these new systems are only deployed when they have been improved and tested appropriately and used responsibly. There is the potential for civilian casualties to be reduced somewhat by the use of autonomous weapons systems.
But the argument by some is the other way. The argument is that once you’ve got machines and the grotesque spectacle of machine-versus-machine warfare, without much human involvement, involving oneself in such warfare actually becomes that much easier.
I would think that there’s a fairly significant ethical element to this, in the sense that you would have to ask yourself at some point in the future: “What is the nature of warfare? What is warfare? What is it all about?” Is it all about machine versus machine? You’ll hear the argument that “I am prepared to take my chances in warfare, but I do not accept being killed by the decision of a machine.” Then you’ll hear others turning around and saying, “I don’t want to be killed whether it’s by human or machine.” I think it is very difficult to know how the ethical side is going to play out.
I think there’s a tendency of people to look at technology as it is now and look into the future and say is that technology acceptable? I would ask myself whether there is merit in going in the reverse direction.
Imagine ourselves in a situation in which we have developed machine versus machine warfare and we have all become used to it. How acceptable would it be to go back to the arrangements that we had previously?
You don’t get that being discussed often in those terms, because people don’t seem to think in that way. There’s a tendency of human beings to think in a single direction when sometimes it’s useful to think in reverse.
Of course, anyone who is talking about machine warfare as no-casualty warfare is in cloud cuckoo land. Let’s be honest: there are always going to be victims, and it is always going to be a tragedy.
Price: Is there a risk that autonomous weaponry could encourage more destructive wars when soldiers’ lives aren’t at stake?
Boothby: There’s all sorts of possibilities, and that’s one of them. And then there's also the worry about what happens when autonomous technology gets in the hands of non-state actors.
So yes, maybe is my answer to this one. There’s a lot of speculation about some of these questions.
We delude ourselves if we look at one particular type of tech in isolation. I think we need increasingly to recognise that at the same time that autonomous technologies are being developed, other technologies are being developed as well — notably cyber.
And the minute you start thinking of autonomous technologies, you should then start worrying, or thinking, about the potential for cyber techniques to be used to get inside an enemy’s autonomous weapons system, and either take it over or distort the way it makes decisions.
Equally, there are other challenges. A lot of autonomy is going to be based on the use of artificial intelligence. It’s going to be what I described in the second edition of my book, "Weapons and the Law of Armed Conflict," as artificial learning intelligence (ALI) as opposed to artificial intelligence simpliciter as it were.
What we’re talking about is the ability of a machine to learn lessons, and learn its own lessons — not necessarily the lessons it’s been told to learn.
So then you get into the question of: right, it may be learning lessons other than the ones you told it to learn, but have you told it which lessons it mustn’t learn, and have you thought through which lessons it ought not to learn, and why, and checked that the system you’re deploying is going to be safe from that perspective?