The value of giving troops on the ground the ability to confirm a target’s identity goes without saying. Positive identification matters most when taking someone into custody or confirming that a high-value target has been killed, and in both cases there’s a good chance American troops will be tasked with making that confirmation under cover of darkness.

While technology developed specifically to assist in identifying targets already exists, these systems rely on comparing visual cues against a database of images – something that requires the kind of good lighting that often isn’t available to special operations troops, particularly in environments that demand light discipline.

It’s with these challenges in mind that the Army recently announced the development of a new system that uses artificial intelligence and a process known as “machine learning” to cross-reference thermal imaging with traditional file photographs. While the process is complex, the product is simple: a means of verifying a target’s identity in nighttime conditions.

“This technology enables matching between thermal face images and existing biometric face databases/watch lists that only contain visible face imagery,” said Dr. Benjamin S. Riggan, a research scientist assigned to the project. “The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis.”

“Machine learning” is a form of artificial intelligence-based data analysis that leverages a computer’s ability to identify patterns and make decisions with little intervention from a human operator. In other words, it’s a means of letting a system learn as it goes and make limited extrapolations from the data available to it. That allows the system to match visual cues on a target’s face even though the probe imagery and the comparison files capture different parts of the light spectrum.

“When using thermal cameras to capture facial imagery, the main challenge is that the captured thermal image must be matched against a watch list or gallery that only contains conventional visible imagery from known persons of interest,” Riggan said.

“Therefore, the problem becomes what is referred to as cross-spectrum, or heterogeneous, face recognition. In this case, facial probe imagery acquired in one modality is matched against a gallery database acquired using a different imaging modality.”
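To make that cross-spectrum matching concrete, here is a rough, hypothetical sketch in Python of how such a pipeline could be organized: a thermal probe is first converted into a visible-like face, the probe and each gallery photo are reduced to feature vectors, and the probe is scored against every watch-list entry. The function names and the simple math inside them are stand-ins for illustration only, not the Army’s trained networks.

```python
# Hypothetical sketch of cross-spectrum ("heterogeneous") face matching:
# a thermal probe is converted to a visible-like face, then compared
# against a gallery of visible watch-list photos. The synthesis and
# embedding steps below are illustrative stand-ins, not trained models.
import numpy as np


def synthesize_visible(thermal_face: np.ndarray) -> np.ndarray:
    """Stand-in for a learned thermal-to-visible synthesis network."""
    # A real system would run a trained generative model here; we only
    # rescale the thermal values so the example runs end to end.
    span = thermal_face.max() - thermal_face.min()
    return (thermal_face - thermal_face.min()) / (span + 1e-8)


def embed(face: np.ndarray) -> np.ndarray:
    """Stand-in for a face-recognition embedding network."""
    # A real system would extract learned facial features; here we just
    # flatten the pixels and L2-normalize them.
    vec = face.astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)


def match_against_gallery(thermal_probe: np.ndarray, gallery: dict) -> tuple:
    """Score a thermal probe against a visible-light gallery, return the best match."""
    probe_vec = embed(synthesize_visible(thermal_probe))
    scores = {name: float(probe_vec @ embed(photo)) for name, photo in gallery.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]


# Toy example: random arrays stand in for cropped face images.
rng = np.random.default_rng(0)
gallery = {"person_a": rng.random((64, 64)), "person_b": rng.random((64, 64))}
print(match_against_gallery(rng.random((64, 64)), gallery))
```

In a deployed system, both stand-in functions would be replaced by the trained networks the researchers describe, and a similarity threshold tuned against known error rates would decide whether the best score counts as a positive identification.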

A conceptual illustration of thermal-to-visible synthesis for interoperability with existing visible-based facial recognition systems. (Courtesy Eric Proctor, William Parks and Benjamin S. Riggan)

The system was first unveiled in a peer-reviewed paper released in March, entitled “Thermal to Visible Synthesis of Face Images using Multiple Regions.” Soon after publication, Army researchers gave a demonstration of the technology at the IEEE Winter Conference on Applications of Computer Vision, or WACV, in Lake Tahoe, Nevada.

In March’s demonstration, the team showed the system working in near real time, using a laptop to handle the computations and a FLIR Boson 320 thermal camera to provide the imaging. The demonstration proved the concept works, and because it already runs on little more than a laptop and camera, it likely wouldn’t take long to adapt it to a deployable platform.
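As a loose illustration of what a laptop-and-camera demonstration loop might look like, the sketch below polls a thermal frame source and reports any watch-list score that clears a decision threshold. The frame reader, matching routine, and threshold are assumptions made for illustration – they are not FLIR’s or the Army’s actual interfaces.

```python
# Hypothetical near-real-time loop: grab thermal frames, score them
# against the watch list, and report likely matches. The camera access
# and matching logic are placeholders, not real FLIR or Army APIs.
import time
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed score needed to report a possible match


def read_thermal_frame() -> np.ndarray:
    """Stand-in for grabbing a frame from the thermal camera."""
    # The Boson 320 produces 320 x 256 frames; random data keeps this runnable.
    return np.random.default_rng().random((256, 320))


def match_probe(frame: np.ndarray) -> tuple[str, float]:
    """Stand-in for synthesis plus recognition against the watch-list gallery."""
    return "person_a", float(frame.mean())  # placeholder identity and score


def run_demo(duration_s: float = 5.0) -> None:
    """Poll the camera for a few seconds and print anything above threshold."""
    stop = time.time() + duration_s
    while time.time() < stop:
        name, score = match_probe(read_thermal_frame())
        if score >= MATCH_THRESHOLD:
            print(f"possible match: {name} (score {score:.2f})")
        time.sleep(0.1)  # roughly ten frames per second is plenty for a demo


if __name__ == "__main__":
    run_demo()
```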

It remains unclear whether the Army ultimately plans to equip soldiers in the field with this system. Soldiers wearing individual FLIR cameras could use it to identify targets quickly and easily without compromising their positions, but it also seems likely that the system could be housed on networked computers in a secure location and rely on footage relayed from soldiers in the field to conduct the verification from a distance. That approach would allow use of the system without adding to the weight soldiers already carry in combat.

Feature image courtesy of the Department of Defense