Killer robots, autonomous armed machines programmed to select and kill human targets on their own, don’t exist yet, although artificial intelligence laboratories are working on them, and the very thought has thrown the world into a tizzy.
Last Sunday, Britain’s Ministry of Defence (MoD) produced a report warning that future wars will be fought by armies of robots and by soldiers enhanced through gene editing and drugs, and will extend into space and cyberspace, where terrorists will find new opportunities.
Last month, experts from various countries met at the United Nations offices in Geneva to discuss how to regulate these killer robots. A subsequent report by Human Rights Watch and Harvard Law School’s International Human Rights Clinic argued that fully autonomous weapons would violate the Martens Clause, which requires that civilians and combatants remain protected by the principles of humanity and the dictates of public conscience even where no specific treaty applies.
Last summer, Elon Musk and 105 other signatories petitioned the United Nations to ban killer robots:
Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.
In 2012, Mary Wareham, coordinator of the Campaign to Stop Killer Robots, led a small group of demonstrators in Trafalgar Square, London, while their robot circled the square intoning “Damn killer robots.”
For people like Christine Fair, a military affairs expert and associate professor in the Security Studies Program at Georgetown University’s Edmund A. Walsh School of Foreign Service, the hysteria is absurd. Comparing killer robots to drones, she insisted:
“Drones are the most effective tool for intimidating and suppressing terrorists. We can either do nothing and allow ourselves to be killed or react with autonomous armed robots that perform better than armed humans in combat, and that result in fewer casualties.”
As IEEE Spectrum senior writer Evan Ackerman sees it, there’s one simple solution. Since we’re not going to prevent killer robots from existing, make them ethical:
I think that it will be possible for robots to be as good (or better) at identifying hostile enemy combatants as humans, since there are rules that can be followed (called Rules of Engagement) to determine whether or not using force is justified. For example, does your target have a weapon? Is that weapon pointed at you? Has the weapon been fired? … These are all things that a robot can determine using any number of sensors that currently exist.
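Purely as an illustration of Ackerman’s point, here is a minimal sketch of how such a rules-of-engagement checklist might be encoded, assuming each question could be answered by some sensor pipeline. Every name in it (SensorReading, force_justified, the three boolean fields) is hypothetical, invented for this example; it does not describe any real weapon system.

```python
# Toy illustration only: a hypothetical rules-of-engagement check,
# reduced to the three questions Ackerman lists. All names here are
# invented for the example; no real system is being described.
from dataclasses import dataclass

@dataclass
class SensorReading:
    has_weapon: bool          # e.g., from object detection (assumed input)
    weapon_aimed_at_us: bool  # e.g., from pose estimation (assumed input)
    weapon_fired: bool        # e.g., from acoustic detection (assumed input)

def force_justified(reading: SensorReading) -> bool:
    """Apply the checklist in order; any failed check means no force."""
    if not reading.has_weapon:
        return False
    if not reading.weapon_aimed_at_us:
        return False
    return reading.weapon_fired

# Example: armed and aiming, but has not fired -> force is not justified.
print(force_justified(SensorReading(True, True, False)))  # False
```

The boolean logic itself is trivial; the hard and contested part is producing those three inputs reliably from raw sensor data, which is precisely where Ackerman’s optimism and his critics’ skepticism diverge.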
For doomsayers like Elon Musk and Stephen Hawking, that argument is beside the point, since killer robots will turn us into mincemeat before we can limit their powers.
Last year, Facebook’s Mark Zuckerberg invited Musk for a meal at his home, where two top researchers from Facebook’s new artificial intelligence lab and two other Facebook executives tried to convince him that he was wrong. The entrepreneur behind SpaceX and the electric-car maker Tesla refused to budge an inch. Later, he warned that killer robots could usher in World War III. Before his death, Hawking warned that AI could destroy humanity if we allow it to control us.
Most of you seem to have had some sort of military experience: What do YOU think? Should we give killer robots a chance or trash them?