Article by NEWSREP guest author Sean McWillie —
In May 2018, Defense Secretary Mattis sent a still-unreleased memo to President Trump urging a national-level strategy for artificial intelligence (AI). This opacity at the national level regarding advances in AI points to two problems: an overestimation of both the harms and the benefits such technology can provide, and a lack of expert understanding across all echelons. Put another way, the dearth of openness about AI invites both doom-and-gloom and rose-colored predictions about the kind of change AI will unleash, while the people who could make marked improvements are left out of the conversation entirely.
That we know little more than that such a memo exists underscores a sobering reality: while AI progresses rapidly, no one seems to know what to do about it beyond paying attention to it because it seems important. AI has emerged from a dormant period, known as the ‘AI Winter’, in which funding, interest, and research all but dried up. Its recent revival has come in the form of machine learning (ML). Machine learning allows non-statisticians to do statistical modeling by training computational agents (hence “machine learning”) on known connections between points in a data set, in order to draw conclusions about unknown connections.
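To make that train-then-predict loop concrete, here is a minimal sketch in Python. The feature values and labels are invented, and scikit-learn’s LogisticRegression stands in for whatever model a practitioner might actually choose:

```python
# Minimal sketch of supervised machine learning: fit a model on labeled
# examples (the known connections), then predict labels for unseen points.
from sklearn.linear_model import LogisticRegression

# Invented training data: each row is a data point, each label a known outcome.
X_train = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
y_train = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)        # learn from the known connections

X_new = [[0.15, 0.85]]             # a point whose label is unknown
print(model.predict(X_new))        # the model's conclusion, e.g. [1]
print(model.predict_proba(X_new))  # and its confidence in that call
```

The point of the sketch is that the statistics are packaged away: the practitioner supplies labeled examples and receives predictions, without ever writing a model by hand.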
Using machine learning for national security brings unique challenges, from cleaning operational data to controlling costs and meeting accuracy requirements, that academic ML scientists rarely need to worry about. But no one in their right mind would sleep well at night knowing that their loved ones boarded a plane after being screened by a lowest-bidder ML contractor, or that a village was carpet-bombed because a computer said there was a 60 percent likelihood of it harboring a terrorist cell. Yet these are extant issues with machine learning that researchers have yet to overcome.
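That 60 percent figure illustrates a deeper point: a classifier only produces a probability, and somewhere a human-chosen threshold turns that probability into an action. Here is a deliberately toy sketch of that step; the function name and cutoff are hypothetical, not drawn from any real system:

```python
# Hypothetical illustration: a model emits a probability, but a
# human-chosen cutoff is what turns that probability into an action.
def flag_for_action(p_threat: float, threshold: float = 0.6) -> bool:
    """Return True if the estimated threat probability meets the cutoff.

    Neither number is an objective fact: the estimate inherits the biases
    of its training data, and the threshold encodes a human judgment about
    how many false positives and false negatives are acceptable.
    """
    return p_threat >= threshold

print(flag_for_action(0.60))  # True:  acted on, yet wrong 4 times in 10
print(flag_for_action(0.59))  # False: ignored, on nearly the same evidence
```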
This unreleased memo is just another glaring example of a four-fold disconnect between the general population’s assumptions about machine learning, what academics describe in their literature, how practitioners actually implement it, and how the press reports on it. It is safe to assume that institutions using machine learning do not routinely hire subject-matter experts (in fields such as ethics or counterterrorism), yet it would have been useful to ask them about ethical considerations. One example: if a machine learning agent produces a negative outcome, say one that leads to tort claims or liability under the UCMJ, who is the responsible agent?
If a life is on the line and a machine learning agent fails to flag a terrorist based on their known activity and social connections, who is to blame? It may well be that such life-and-death situations are too dire to offload onto machines at this time. Since one of the core procedures of machine learning is training on known connections, connections made by actual humans, the notion of a perfectly rational and infallible machine simply doesn’t pass muster.
An even graver issue is that machine learning is reactive. Lessons learned yesterday become tomorrow’s “the machine didn’t catch it.” This is an example of positive bias: machine learning engineers run the risk of only trying to catch people using known tactics, which in effect rewards adversaries who innovate. After all, no one attempts to hide explosives or weapons in their shoes. Do they?
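A deliberately toy sketch of that reactive failure mode: a classifier trained only on tactics seen in the past has nothing to say about a novel one. The features and data below are invented for illustration:

```python
# Toy sketch of reactive ML: train only on known tactics, then face a new one.
from sklearn.tree import DecisionTreeClassifier

# Invented features: [hidden_in_luggage, hidden_in_clothing]
known_cases = [[1, 0], [1, 0], [0, 0], [0, 0]]
labels      = [1, 1, 0, 0]  # 1 = threat, 0 = benign (past cases only)

model = DecisionTreeClassifier().fit(known_cases, labels)

novel_tactic = [[0, 1]]  # a tactic the training data has never seen
print(model.predict(novel_tactic))  # [0]: "the machine didn't catch it"
```

The model is not malfunctioning here; it is faithfully generalizing from its training data, which is precisely the problem when the adversary’s incentive is to do something the training data has never recorded.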