Earlier this month, Google employees caused a stir in Silicon Valley when a number of them resigned in protest after the company agreed to work with the Defense Department on a new artificial intelligence initiative. Overwhelmingly, the media presented the gesture as an ethical stand, with tech professionals doing their part to stem a tide of Terminator-style robots roving the battlespace and making life-and-death decisions with seemingly no human supervision. These departing Googlers, then, were heroes, pressing society to ask hard questions about what we’re capable of building and whether we should build it at all.
Of course, the reality of the situation didn’t quite sync up with the dramatic headlines and lofty narratives presented in petitions and op-eds. The truth of the matter is, Project Maven is indeed a Google-partnered artificial intelligence endeavor, but it was never aimed at pulling any triggers. What the Pentagon is really looking for is help sifting through the mountains of data created by counter-terrorism and national defense assets all over the world, so it can more quickly and accurately identify trends, threats, and targets without forcing analysts to pore over drone feeds frame by frame. In the complex world of combat operations, seconds can mean the difference between accomplishing an objective and missing it, or worse, between living and dying. Quickly and accurately surfacing the information that matters to an operator on the ground may not be as dramatic as building the Skynet-style apocalyptic AI some made Maven out to be, but for the nation’s increasingly over-tasked special operations community, it could legitimately be a lifesaver.
“We are getting so much information that we can’t go through it all,” said Glen Cullen, program manager for sensitive site exploitation within the Program Executive Office for Special Reconnaissance, Surveillance and Exploitation. “We need to have it triaged. We need to be able to identify what’s important from massive volumes of information.”
Projects like Maven aim to use the same sort of algorithms Google employs in its image search to identify elements of an image that warrant human investigation. The system’s AI makes no decision beyond that: it simply spots something within the parameters it was told to look for and informs a human operator that a given clip of footage, document, or image may contain something worthy of human assessment.
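In software terms, that workflow is a human-in-the-loop filter: a model scores incoming material against a watchlist of target classes, and anything above a confidence threshold is queued for an analyst rather than acted on. The sketch below illustrates the pattern in Python; the detector, labels, and thresholds are all invented stand-ins for demonstration, not anything from Maven’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the model thinks it saw
    confidence: float  # model confidence, 0.0 to 1.0
    frame_id: int      # which frame of the feed it appeared in

# The "parameters it was told to look for" and the review cutoff;
# both values here are arbitrary and purely illustrative.
WATCHLIST = {"vehicle", "person", "structure"}
REVIEW_THRESHOLD = 0.6

def mock_detector(frame_id: int) -> list[Detection]:
    """Stand-in for a trained object-detection model scoring one frame."""
    # A real system would run inference on pixel data here.
    canned = {
        0: [Detection("vehicle", 0.91, 0)],
        1: [Detection("tree", 0.88, 1)],
        2: [Detection("person", 0.45, 2), Detection("structure", 0.73, 2)],
    }
    return canned.get(frame_id, [])

def triage(frame_ids: list[int]) -> list[Detection]:
    """Flag frames for a human analyst; the software decides nothing else."""
    flagged = []
    for fid in frame_ids:
        for det in mock_detector(fid):
            if det.label in WATCHLIST and det.confidence >= REVIEW_THRESHOLD:
                flagged.append(det)  # queue for human assessment
    return flagged

if __name__ == "__main__":
    for det in triage([0, 1, 2]):
        print(f"frame {det.frame_id}: possible {det.label} "
              f"({det.confidence:.0%}) -- send to analyst")
```

Note that the low-confidence “person” hit in frame 2 never reaches the queue, and nothing in the code acts on a detection; the output is a worklist for a human, which is the entire point of the design.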
The endeavor to help American troops on the ground quickly sort through intelligence is a far cry from the open letter tech industry insiders addressed to Google last week, which claimed the company was working to develop “algorithms meant to target and kill at a distance and without public accountability.”
“Our guys are operating worldwide, working in a country [where they] may not know the language,” Cullen said. “You get a document, it’s got some key words in it and you’re wondering, ‘Hey, is this some high school kid’s chemistry homework, or is it a formula to make a bomb?’” The same sort of AI currently harnessed for a high schooler’s French homework could help identify targets in drone footage, quickly translate important words or symbols found on documents, and, perhaps most importantly, let operators quickly sift through materials found on site that might otherwise not survive further investigation.
“There are a lot of guys now that are setting up hard drives where as soon as you go to try and exploit it, it wipes the hard drive. If I’ve got one opportunity to search an iPhone or search a hard drive and I’m done, I might want to know what I’m up against before I actually go in and start messing something up.”
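Cullen’s homework-or-bomb example boils down to the same triage idea applied to captured documents: translate, scan for watchlist terms, weigh the surrounding context, and route the result to a human. Here is a deliberately toy sketch of that scoring step; the term lists and sample documents are invented for illustration, and a real system would lean on machine translation and far richer language models.

```python
# Toy keyword triage for (already translated) captured documents.
# Both term sets below are fabricated examples, not an operational watchlist.
PRECURSOR_TERMS = {"ammonium nitrate", "detonator", "timer circuit"}
BENIGN_CONTEXT_TERMS = {"homework", "worksheet", "exam", "chapter"}

def score_document(text: str) -> tuple[int, int]:
    """Count watchlist hits vs. benign-context hits in a document."""
    lowered = text.lower()
    hits = sum(term in lowered for term in PRECURSOR_TERMS)
    benign = sum(term in lowered for term in BENIGN_CONTEXT_TERMS)
    return hits, benign

def triage_document(text: str) -> str:
    """Return a routing suggestion; a human analyst makes the actual call."""
    hits, benign = score_document(text)
    if hits and not benign:
        return "PRIORITY: route to analyst immediately"
    if hits:
        return "REVIEW: watchlist terms in a possibly benign context"
    return "LOW: no watchlist terms found"

if __name__ == "__main__":
    homework = "Chapter 4 worksheet: balance the equation for ammonium nitrate."
    seized = "Wire the timer circuit to the detonator, then pack ammonium nitrate."
    print(triage_document(homework))  # REVIEW: possibly the chemistry homework
    print(triage_document(seized))    # PRIORITY: route to analyst immediately
```

Even in this crude form, the chemistry homework and the bomb recipe land in different queues, and in both cases the software only suggests where a person should look first.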
Artificial intelligence has become a tech-industry buzzword, thanks in no small part to high-profile figures like Elon Musk, who regularly opines about the threat he believes AI poses to the future of humanity. That mentality, however, suggests artificial intelligence is something on the horizon rather than a tool already used by millions every day. AI may indeed one day grow powerful enough to warrant those fears, but today, facets of that all-encompassing term are already at work inside your internet browser, identifying elements of images without any human interaction to serve you pictures that match your search criteria.
That mindset toward AI is evident in recent statements on this very topic from Eric Schmidt, Google’s former CEO and later executive chairman of its parent company, Alphabet.
“I think Elon is exactly wrong,” he told a crowd at the VivaTech conference in Paris last week. “I think that AI is going to unlock a huge amount of positive things, whether that’s helping to identify and cure diseases, to help cars drive more safely, to help keep our communities safe.”
Image courtesy of the Department of Defense