After a week of discussions in Geneva regarding the future of “lethal autonomous weapon systems” (LAWS), Russia took an in-character stand against the United Nations’ position that drones should not be developed to autonomously conduct lethal military operations without any input from a human operator.  Although the intention of the UN meeting was to establish a framework for banning fully autonomous “killer robots,” Russia’s refusal to accept a ban, or even any form of regulation on the topic, proved too large a hurdle for the assembly to overcome.

According to a statement Russia released during the event, the international community’s current understanding of the abilities and applications of such robots is lacking, and as such, any attempt to ban or regulate them would be futile.

“Certainly, there are precedents of reaching international agreements that establish a preventive ban on prospective types of weapons. However, this can hardly be considered as an argument for taking preventive prohibitive or restrictive measures against LAWS being a by far more complex and wide class of weapons of which the current understanding of humankind is rather approximate.”

Russia made a well-reasoned argument against stifling the development of new technologies with further-reaching implications than simply producing “killer” robots, though it could be argued that participating in meaningful discussion about how to responsibly pursue new forms of military robotics could still have produced positive results.

“The difficulty of making a clear distinction between civilian and military developments of autonomous systems based on the same technologies is still an essential obstacle in the discussion on LAWS. It is hardly acceptable for the work on LAWS to restrict the freedom to enjoy the benefits of autonomous technologies being the future of humankind.”

Although it was apparently Russia’s unwillingness to allow the conversation to steer toward regulation that threw the event off course, Indian Ambassador Amandeep Singh Gill, the chairperson of the Group of Governmental Experts in attendance, also released a position paper that seemed to echo the Russian sentiment that it’s far too early to discuss regulation or limitation of a technology still in its infancy.  Some have criticized Gill’s position and comments throughout the event, however, as he was effectively responsible for directing the talks and seemed uninterested in pursuing the line of discussion the group had been gathered for.

“A lot of states came prepared to talk about definitions. That’s what the mandate was,” DefenseOne reported one unnamed participant as saying. For a governmental delegation “to put out a position paper like that, it has to get vetted through a lot of parts of your government… it was discouraging. It’s important that States feel like they’re vested in the process.”

Of course, the international discussion regarding how, or if, nations should be regulated in their use of robots on the battlefield is sure to be ongoing – especially as advancements in drone technology continue to blur the lines between operations that require direct human interaction and those that can be conducted through advancing artificial intelligence and decision-making algorithms.  While people within the United States continue to debate the idea of self-driving cars deciding who lives and who dies, it stands to reason that the debate about robots that make those decisions directly will rage on well beyond the day the first fully autonomous war fighters reach the battlefield.

Image courtesy of the Russian Ministry of Defense