Earlier this month, Google employees caused a stir in Silicon Valley when several of them resigned in protest after the company agreed to work with the Defense Department on a new artificial intelligence initiative. Overwhelmingly, the media presented the gesture as an ethical stand, with tech professionals doing their part to stem a tide of Terminator robots roving a nearby battle space, making complex decisions about who lives and who dies with seemingly no human supervision. The departing Googlers, then, were heroes, urging society to ask hard questions about what we're capable of building and whether we should build it at all.
Of course, the reality of the situation didn't quite sync up with the dramatic headlines and lofty narratives presented in petitions and op-eds. The truth of the matter is, Project Maven is indeed a Google-partnered artificial intelligence endeavor, but it never aimed to make decisions about pulling any triggers. What the Pentagon is really looking for is help sifting through the mountains of data created by counter-terrorism and national defense assets all over the world, to more quickly and accurately identify trends, threats and targets without forcing analysts to pore over drone feeds frame by frame. In the complex world of combat operations, seconds can mean the difference between accomplishing an objective and missing it, or worse, between living and dying. Quickly and accurately surfacing the information that matters to an operator on the ground may not be as dramatic as building the Skynet-style apocalypse AI that some made Maven out to be, but for the nation's increasingly over-tasked special operations community, it could legitimately be a lifesaver.
“We are getting so much information that we can’t go through it all,” said Glen Cullen, program manager for sensitive site exploitation within the program executive office for special reconnaissance, surveillance and exploitation. “We need to have it triaged. We need to be able to identify what’s important from massive volumes of information.”
Projects like Maven aim to use the same sort of algorithms Google employs in products like its image search to identify elements of an image that warrant human investigation. The system's AI makes no decision beyond that: it simply spots something within the parameters it was told to look for and informs a human operator that a given clip of footage, document, or image may contain something worthy of human assessment.
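In programming terms, that "flag, don't decide" workflow amounts to a simple triage loop: a recognition model scores each frame, and anything above a confidence threshold is queued for a human analyst rather than acted on. The sketch below is purely illustrative; the detector, labels, and threshold are hypothetical stand-ins, not details of Project Maven itself.

```python
def detect_objects(frame):
    """Stand-in for a trained image-recognition model.

    Returns (label, confidence) pairs for the things the model
    was told to look for in a single frame of footage."""
    # Hard-coded demo output in place of a real model.
    return [("vehicle", 0.91), ("structure", 0.48)]

def triage(frames, watch_labels, threshold=0.8):
    """Flag frames for human review; never act on them."""
    flagged = []
    for i, frame in enumerate(frames):
        for label, confidence in detect_objects(frame):
            if label in watch_labels and confidence >= threshold:
                # The AI's job ends here: note the hit and move on.
                flagged.append((i, label, confidence))
    return flagged

review_queue = triage(frames=["frame_0", "frame_1"],
                      watch_labels={"vehicle"})
# Every entry in review_queue still goes to a human analyst.
```

The point of the structure is that the model's output is only ever an input to a person's judgment, which is the distinction the program office draws between triage and targeting.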
The endeavor to help American troops on the ground quickly sort through intelligence is a far cry from the claims of the open letter tech industry insiders addressed to Google last week, which accused the company of helping to develop “algorithms meant to target and kill at a distance and without public accountability.”
“Our guys are operating worldwide, working in a country [where they] may not know the language,” Cullen said. “You get a document, it’s got some key words in it and you’re wondering, ‘Hey, is this some high school kid’s chemistry homework, or is it a formula to make a bomb?’” The same sort of AI currently harnessed for a high schooler’s French homework could help identify targets in drone footage, quickly translate important words or symbols found on documents and, perhaps most importantly, enable operators to quickly sift through materials found on site that might otherwise not survive further investigation.