Op-Ed: Is machine learning the future of national security, or the cyber equivalent of the F-22?

In this April 23, 2018, photo, Ashley McManus, global marketing director of the Boston-based artificial intelligence firm, Affectiva, demonstrates facial recognition technology that is geared to help detect driver distraction, at their offices in Boston. Recent advances in AI-powered computer vision have spawned startups like Affectiva, accelerated the race for self-driving cars and powered the increasingly sophisticated photo-tagging features found on Facebook and Google. (AP Photo/Elise Amendola)

Article by NEWSREP guest author Sean McWillie —

In May 2018, Defense Secretary Mattis sent a still-unreleased memo to President Trump urging a national-level strategy for artificial intelligence (AI). This opaqueness at the national level about advances in AI points to two problems: an over-estimation of the harms and benefits the technology can deliver, and a lack of expert understanding across all echelons. Put another way, the dearth of openness about AI invites both doom-and-gloom and rose-colored predictions about the change AI is poised to unleash, while the people who could make marked improvements are left out of the conversation entirely.

The fact that we know little more than that such a memo exists carries a sobering implication: while AI progresses rapidly, no one seems to know what to do about it beyond paying attention because it seems important. AI has emerged from a dormant period, the so-called "AI winter," in which funding, interest, and research all but dried up. Its recent revival has come largely in the form of machine learning (ML). Machine learning lets non-statisticians do statistical modeling by training computational agents (hence "machine learning") on known connections between points in a data set, so that the trained model can draw conclusions about connections that are not yet known.
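
As a concrete illustration of that "known connections" idea, here is a minimal sketch of supervised learning in Python using scikit-learn. The features, labels, and numbers are invented for illustration and do not describe any fielded system.

```python
# Minimal sketch of supervised machine learning: fit a model on examples
# whose outcomes are already known, then let it estimate outcomes for new,
# unlabeled examples. All numbers here are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row is a feature vector describing one observed case; each label is
# the "known connection" a human analyst has already established.
known_features = [
    [0.1, 3, 0],
    [0.9, 1, 1],
    [0.4, 2, 0],
    [0.8, 0, 1],
]
known_labels = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(known_features, known_labels)

# The trained model now draws conclusions about a point it has never seen.
new_point = [[0.7, 1, 1]]
print(model.predict(new_point))        # predicted class
print(model.predict_proba(new_point))  # estimated class probabilities
```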

Using machine learning for national security brings its own versions of familiar challenges, such as cleaning data, reducing costs, and increasing accuracy, in forms that ML scientists in other settings do not need to worry about. But no one in their right mind would sleep well at night knowing that a loved one boarded a plane after being screened by a lowest-bidder ML contractor, or that a village was carpet-bombed because a computer said there was a 60 percent likelihood of it harboring a terrorist cell. Yet these are extant issues with machine learning that researchers have yet to overcome.
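
To see why a bare "60 percent likelihood" should unsettle anyone, here is a toy sketch of how a model's score typically becomes a binary decision. The function name and the 0.6 threshold are invented; the arbitrariness of the cut-off is the point.

```python
# Toy sketch of turning a model's estimated probability into an action.
# The 0.6 threshold is an invented policy choice, not a property of the model:
# a score of 0.60 triggers action while 0.59 does not, even though the model
# is nearly as likely to be wrong as right at that level.
def act_on_score(probability: float, threshold: float = 0.6) -> str:
    """Map an estimated probability to a decision (illustration only)."""
    return "flag for action" if probability >= threshold else "take no action"

print(act_on_score(0.60))  # -> flag for action
print(act_on_score(0.59))  # -> take no action
```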

This unreleased memo is just another glaring example of a four-fold disconnect: between the general population's assumptions about machine learning, what academics describe in the literature, how the press reports on it, and how practitioners actually implement it. It is safe to assume that institutions using machine learning do not hire subject matter experts in fields such as ethics or counterterrorism, yet that expertise would help them ask harder ethical questions. One example: if a machine learning agent produces a negative outcome, one that could lead to tort claims or to liability under the UCMJ, who is the responsible agent?

If a life is on the line and a machine learning agent fails to flag a terrorist based on known activity and social connections, who is to blame? It may well be that such life-and-death decisions are too dire to offload onto machines at this time. Given that one of the core procedures of machine learning is training on known connections, connections made by fallible humans, the notion of a perfectly rational and infallible machine just doesn't pass muster.

An even graver issue is that machine learning is reactive. Lessons learned yesterday become tomorrow's "the machine didn't catch it." This is a form of positive bias: machine learning engineers risk building systems that only catch people using known tactics, which rewards adversaries who innovate. After all, no one attempts to hide explosives or weapons in their shoes anymore. Do they?
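
Here is a toy example of that reactive failure mode, again using scikit-learn with an invented feature set: a model trained only on yesterday's known tactics assigns a low threat score to a tactic it has never seen, because nothing in its training data tells it otherwise.

```python
# Toy illustration of machine learning's reactive nature. The model is
# trained only on past, known tactics, so a genuinely novel tactic looks
# benign to it. Features and data are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical feature order: [hides_in_shoes, hides_in_liquids, novel_method]
training_features = [
    [1, 0, 0], [1, 0, 0],   # past threats: shoe concealment
    [0, 1, 0], [0, 1, 0],   # past threats: liquid concealment
    [0, 0, 0], [0, 0, 0],   # benign travelers
    [0, 0, 0], [0, 0, 0],
]
training_labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = threat, 0 = benign

model = LogisticRegression()
model.fit(training_features, training_labels)

# A tactic no one has used before: only the "novel_method" feature is active.
# Because that feature never varies in the training data, the model scores
# this case about the same as a benign traveler.
novel_tactic = [[0, 0, 1]]
print(model.predict(novel_tactic))              # -> [0], predicted benign
print(model.predict_proba(novel_tactic)[0][1])  # low estimated threat probability
```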
