Expert Analysis

Domo Arigato, Mr. Interrogato: A.I. in the role of intelligence gathering

With artificial intelligence (A.I.) and robotics advancing at breakneck speed in recent years, it is no secret that law enforcement and the military are eager to take advantage. But the heavily armed drones capable of delivering death from above and the bomb-disposal robots sent against IEDs share a fundamental weakness: both require a constant or near-constant human operator to make decisions. As a result, robots and computers have mostly taken over tasks that are too dangerous, dull, dumb, or dirty for people to perform well. The idea of a robotic A.I. capable of emulating, resembling, and even understanding humans still seems a long way off. An A.I. in robotic form actually conducting interrogations or source meetings, in particular, is the stuff of science fiction. Or is it?

Reading humans

Recently, researchers claimed to have developed a system capable of reading facial patterns to determine emotions. The majority of human communication is nonverbal (anywhere from 60 to 90 percent, depending on the study and circumstances), so the idea of using body language to gauge someone's mindset is not far-fetched. Interrogators in law enforcement and the military receive training, to varying degrees, to spot exactly that. Unfortunately, it is a skill most people don't pick up easily, especially in the middle of an interrogation or meeting. The researchers built on that concept by programming the A.I. to focus on the lips and eyes of the person in question. Why those areas? Because they are typically the most expressive facial features. Focusing there also sidesteps the traditional problems A.I. faces in deciphering the subtleties of human language. The researchers claimed this helped the A.I. pick out criminals from the general population at an 83 percent success rate, and the article went on to suggest the technology could eventually lead to A.I. conducting interrogations or counseling.

Yeah, I don’t need A.I. telling me this guy is a criminal. | Image courtesy of MPR News
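The report doesn't publish the researchers' actual pipeline, but the general idea of looking only at the eyes and lips can be sketched. The Python snippet below is a minimal, illustrative sketch, assuming OpenCV's stock Haar cascades for detection and a hypothetical, separately trained `classifier` that maps each cropped patch to emotion scores; it is not the system behind the 83 percent claim.

```python
# Illustrative sketch only -- NOT the researchers' system. Assumes the stock
# Haar cascades bundled with opencv-python and a classifier you supply yourself.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
mouth_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def extract_expressive_regions(frame_bgr):
    """Crop eye and mouth patches -- typically the most expressive facial regions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    patches = []
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]
        # Eyes usually sit in the upper half of the face box, the mouth in the lower third.
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face[:h // 2], 1.1, 5):
            patches.append(cv2.resize(face[ey:ey + eh, ex:ex + ew], (32, 32)))
        lower = face[2 * h // 3:]
        for (mx, my, mw, mh) in mouth_cascade.detectMultiScale(lower, 1.5, 10):
            patches.append(cv2.resize(lower[my:my + mh, mx:mx + mw], (32, 32)))
    return patches

def score_emotion(patches, classifier):
    """`classifier` is a hypothetical pre-trained model mapping a normalized
    32x32 crop to emotion scores; the per-patch predictions are averaged."""
    if not patches:
        return None
    return np.mean([classifier(p.astype(np.float32) / 255.0) for p in patches], axis=0)
```

The point of the design is only that restricting attention to a few expressive regions keeps the problem visual rather than linguistic; the real research system, whatever it was, would need a properly trained model and far better landmarking than Haar cascades provide.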

Appearing human

The next hurdle is to make the A.I. visually appealing to the person it is trying to engage. In most circumstances, a form we would consider human-like (in other words, an android) is good enough. It wouldn't have to look exactly like a human, just human enough to hold a conversation people can relate to. For general military purposes such as combat, observation, and routine interaction, you simply need a machine that approximates the human form in stature and build. The entertainment industry has already explored this to great effect, from the android doppelganger Maria in the 1927 film “Metropolis” all the way up to Ethan/E3N in the recent game “Call of Duty: Infinite Warfare.”

Combat operations are one thing, but for human intelligence (HUMINT) or counterintelligence (CI) work, a more realistic human appearance is going to be needed. It's hard for a faceless computer screen to build rapport with a prisoner or source, at least initially. But making an android that can pass for human is, well, hard. A robot that is almost, but not quite, realistic enough falls into the “uncanny valley”: the point where a human replica elicits little beyond eeriness and revulsion. The effect is not restricted to robots; badly rendered 3D characters and lifelike dolls can trigger it in some people as well.

But are we getting there visually? Scientists and engineers are certainly working to build the most realistic human-like robots they can. When Shanghai held the 2016 World Robot Conference, visitors were impressed by “Jiajia,” an android capable of holding conversations, answering questions, and recognizing facial expressions. Although Jiajia’s speech was a bit garbled and any prolonged session with “her” would reveal robotic limitations, it was a big step forward in realism.

The first step will be taking over your heart. Next, the world. | Image courtesy of AFP

Watch the video below of the android “Sophia,” created by Dr. David Hanson, and the way she responds and talks. Again, you won’t confuse her for a real person, but Sophia is approaching the point of climbing out of the uncanny valley. Well, almost, until she talks about destroying humans with that eerie stare.

Understanding humans

So we are slowly approaching the point where A.I. can spot human emotion and at least appear human, if not quite act like one. What is still needed is to make the A.I. properly responsive to human emotion and speech. In 1950, Alan Turing, the English mathematician and codebreaker who helped decrypt German ciphers during WWII, proposed a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human being. For the longest time, most versions were confined to a computer screen and keyboard, with a human participant free to carry on a conversation without knowing whether the responses were coming from another human or a computer.

The machine “passed” if the participant could not reliably tell whether the responses came from a human or a machine. (Different competitions have set different bars; Turing himself suggested that fooling judges more than 30 percent of the time was good enough.) Several machines and programs have supposedly passed the test over the years, most notably “Eugene Goostman,” a program emulating a 13-year-old boy, in 2014. Critics argue that this test, and others built on Turing's idea, are unreliable and highly sensitive to the variables the human judges set up. Other thought experiments, such as the “Chinese Room” argument, go further and press the distinction between an A.I. that genuinely understands (strong A.I.) and one that merely simulates understanding (weak A.I.).
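Turing's 30 percent figure translates into a simple scoring rule. The toy Python sketch below shows how judged sessions might be tallied against that bar; the `JudgedSession` structure and the example numbers are assumptions for illustration, not data from any actual competition.

```python
# Toy scoring for a Turing-style test: the machine "passes" if it fools at
# least 30 percent of the judges who actually conversed with it.
from dataclasses import dataclass

@dataclass
class JudgedSession:
    judge_id: str
    guessed_human: bool   # the judge believed the hidden respondent was human
    was_machine: bool     # the hidden respondent was actually the machine

def passes_turing_bar(sessions, threshold=0.30):
    """Return True if the machine fooled at least `threshold` of its judges."""
    machine_sessions = [s for s in sessions if s.was_machine]
    if not machine_sessions:
        return False
    fooled = sum(1 for s in machine_sessions if s.guessed_human)
    return fooled / len(machine_sessions) >= threshold

# Hypothetical example: 10 of 30 judges fooled (~33 percent) clears the bar,
# roughly the figure reported for the 2014 "Eugene Goostman" runs.
sessions = [JudgedSession(f"judge{i}", guessed_human=(i % 3 == 0), was_machine=True)
            for i in range(30)]
print(passes_turing_bar(sessions))  # True
```

The fragility critics point to is visible even here: the result depends entirely on who the judges are, how long they chat, and where the threshold is drawn.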

So for now, it's probably safe to say that if we do see independently operating A.I. robots in the military, they will appear on the battlefield first. Most likely, they will be placed in support roles or in roles that augment human soldiers on the ground. I'd envision something akin to what we saw in the film “Chappie,” which centered on a robot-augmented police force in Johannesburg.

https://youtu.be/xE0zghsQLAU

A brave new world

The gold standard, handling the intimate human encounters required in most HUMINT, CI, or analytical work, is still some ways off. It will be some time before A.I. is “strong” enough to understand human patterns of thinking and respond accordingly. Most likely, programs will act as support channels, data-mining or watching body language to catch signs of deception from detainees or sources, something like a virtual, reactive polygraph.
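As a rough illustration of that support-channel role, the hypothetical Python sketch below aggregates per-question nonverbal cue scores (from some upstream model) and merely flags statistical outliers for a human interrogator to revisit. It is not a deception detector; every name and threshold in it is an assumption.

```python
# Hypothetical support-tool sketch, not a real system: flag questions whose
# average nonverbal "stress" score is an outlier, for a human to revisit.
from statistics import mean, pstdev

def flag_questions(cue_scores, sigma=2.0):
    """
    cue_scores: dict mapping question_id -> list of per-second cue scores
                (e.g. gaze aversion, lip compression) from an upstream model.
    Returns the question_ids whose average score is at least `sigma`
    standard deviations above the session mean.
    """
    averages = {q: mean(scores) for q, scores in cue_scores.items() if scores}
    if len(averages) < 2:
        return []
    mu, sd = mean(averages.values()), pstdev(averages.values())
    if sd == 0:
        return []
    return [q for q, avg in averages.items() if (avg - mu) / sd >= sigma]
```

The design choice worth noting is that the tool only points a human at moments worth a second look; the judgment, like the polygraph examiner's, stays with the person across the table.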

A.I. will also continue to fill various training requirements. Several years ago, the U.S. Army was exploring a program to help us with interrogation training. It was like a reverse Turing test: we, the interrogators, would question a virtual detainee, a computer avatar, to practice basic interrogation techniques. From what I remember, it was still very much a work in progress. The answers were sometimes vague, and the A.I. would often misinterpret or misunderstand our spoken questions. Even as a basic tool that intentionally did not emulate a true one-on-one interrogation, it was lacking, to say the least.
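To illustrate why such a training avatar feels brittle, here is a minimal, hypothetical Python sketch of a keyword-matched virtual detainee. The canned persona and answers are invented, and the Army's actual program was surely more sophisticated, but the failure mode (questions phrased outside the expected keywords fall through) is the one described above.

```python
# Minimal sketch of a keyword-matched "virtual detainee" -- illustrative only,
# not the Army's training program. Persona and answers are hypothetical.
CANNED_ANSWERS = {
    frozenset({"name"}): "My name is Karim.",
    frozenset({"where", "live"}): "I live near the market district.",
    frozenset({"who", "with"}): "I was alone that night.",
}
FALLBACK = "I do not understand the question."

def virtual_detainee(question: str) -> str:
    """Pick the canned answer whose keywords best overlap the question."""
    words = set(question.lower().replace("?", "").split())
    best_answer, best_overlap = FALLBACK, 0
    for keywords, answer in CANNED_ANSWERS.items():
        overlap = len(keywords & words)
        if overlap > best_overlap:
            best_answer, best_overlap = answer, overlap
    return best_answer

print(virtual_detainee("What is your name?"))             # hits the canned line
print(virtual_detainee("Whom were you traveling with?"))  # matches only on "with"
print(virtual_detainee("Walk me through your evening."))  # falls back -- the brittleness
```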

The biggest hurdle for A.I. is learning context, something that requires experience and exposure to human interaction. The show “Person of Interest” delved into this, with its A.I. watchdog program, “the Machine,” requiring years of guidance and observation of human society to grasp the difference between malicious and justifiable acts of violence.

Perhaps the biggest question is: should we go down this trail at all? Should we have moral qualms about letting A.I. take over ever-larger roles in the military? That I don't know. But just in case it's inevitable, I, for one, welcome our robot overlords.

Yes, indeed Kent. | Image courtesy of Fox Broadcasting Company

Sources:

The rise of the robot interrogator

Uncanny valley: why we find human-like robots and dolls so creepy

The Chinese ‘robot goddess’ who can hold a conversation and could replace human workers within a decade

The truth about the Turing Test

Chinese Room Argument

The Police Are Recruiting Interrogation Bots 

Featured illustration courtesy of author

About Leo W.

Leo W. is a U.S. Army veteran who spent five years on active duty with the 4th POG (now MISOG) out of Fort Bragg. After not learning his lesson completely, he spent an additional five years in the U.S. Army Reserve being kicked around different units as a HUMINT collector.
