AI’s Next Phase—Learning the Human Touch

The next phase of AI is the human touch.

FutureAI, an early-stage technology company working on artificial intelligence algorithms, is led by Charles Simon. He co-authored the book “Will Computers Revolt? Preparing for the Future of Artificial Intelligence” and created Brain Simulator II, an AGI research software platform, as well as Sallie, a prototype software and artificial entity that learns in real-time via vision, hearing, speaking, and locomotion.

AGI (Artificial General Intelligence) is an intelligent agent with the same characteristics as the human brain, including common sense, prior knowledge, transfer learning, abstraction, and causality. The human ability to generalize from sparse or limited input is particularly interesting.

An AGI won’t require money, territory, or power, or even need to ensure its own survival, as humans do.

The Department of Defense is broadly exploring how artificial intelligence can be used in the military to improve target recognition. This is important for several reasons:

  1. It can help distinguish friendly troops and civilians from enemy combatants.
  2. It can help improve accuracy when targeting enemies.
  3. It can help reduce the number of casualties sustained by our troops.

One way AI can improve target recognition is through thermal imaging, which detects differences in heat signatures that can be used to identify targets. For example, a person’s heat signature stands out against cooler surroundings. AI can also improve target recognition through facial recognition software, which identifies people by their facial features; this matters because it can help distinguish enemy combatants from civilians.
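
As a rough illustration of the heat-signature idea, the sketch below uses OpenCV to flag unusually warm regions in a thermal frame. It is a minimal, hypothetical example, not anything the DoD actually fields: the file name thermal_frame.png and the intensity threshold are assumptions made purely for the demonstration.

```python
# Minimal sketch: flag warm regions in a single-channel thermal image.
# Assumes an 8-bit grayscale thermal frame at the hypothetical path below.
import cv2

frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)
if frame is None:
    raise FileNotFoundError("thermal_frame.png not found")

# Treat pixels above an illustrative intensity threshold as "warm"
# (e.g., body heat against a cooler background).
_, warm_mask = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)

# Group warm pixels into candidate regions and report their bounding boxes.
contours, _ = cv2.findContours(warm_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    if w * h > 100:  # ignore tiny noise blobs
        print(f"Possible heat signature at x={x}, y={y}, size {w}x{h}")
```

In a real system, the fixed threshold would give way to calibrated temperature data and a trained detector, but the thresholding step captures the basic principle of separating heat signatures from the background.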

(Source: mikemacmarketing/Wikimedia)

AI can also improve target recognition through machine learning algorithms, which can “learn” to distinguish between different targets. For example, a machine learning algorithm might be trained on a dataset containing images of friendly troops and enemy combatants. The algorithm learns to tell the two apart based on characteristics such as body shape, clothing, and facial features.
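
To make that concrete, here is a minimal transfer-learning sketch in PyTorch showing how such a classifier could be trained. It is a hypothetical illustration only: the targets/ directory and its friendly/ and combatant/ subfolders are assumed stand-ins for a labeled image dataset, not anything described in the article.

```python
# Minimal sketch: fine-tune a pretrained CNN to separate two image classes.
# Assumes a hypothetical layout: targets/friendly/*.jpg, targets/combatant/*.jpg
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("targets/", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained network and retrain only its final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Reusing a network pretrained on a large, unrelated dataset is itself a form of the transfer learning mentioned earlier, which is why only the small final layer needs to be retrained on the new images.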

The technology still has limits, however. Research conducted by scientists at MIT suggests that current machine learning algorithms cannot yet reliably distinguish between different types of targets (e.g., civilians vs. combatants) using facial features alone, though future iterations of these algorithms will likely become more accurate.

“The results were really striking. In fact, the first time we did this experiment, we thought it was a bug. It took us several weeks to realize it was a real result because it was so unexpected,” said Xavier Boix, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and the Center for Brains, Minds, and Machines (CBMM), and senior author of the paper.

AI is becoming a major player in every industry. Whether we acknowledge it or not, every time we Google something or ask Siri a question, we are using AI.

Using AI in defense systems is becoming as ubiquitous as it is controversial. The Department of Defense, like its Chinese and Russian counterparts, is investing billions of dollars in developing and integrating AI into defense systems. Artificial general intelligence, an intelligent agent that can handle any intellectual task in the same way a human being can, is one of the future technologies the DoD is now embracing in its forward-looking initiatives.

Some experts, basing their estimates on the approach of simulating the brain or its components, believe AGI will never occur or, at the very least, will not occur for hundreds of years. However, there are many paths to AGI, and many of them will lead to custom AGI chips that boost performance in the same way today’s GPUs enable machine learning.

Others argue that computing power is already sufficient to achieve AGI; the real limitation is our incomplete knowledge of how the human brain learns and processes information.

Many experts anticipate that AGI will develop gradually over the next decade due to the research currently being conducted in speech recognition, computer vision, and robotics. As AGI capabilities continue to develop, they will eventually achieve human levels.

AI and AGIs vs. Humans

It remains to be seen when future AGIs will surpass human mental abilities, whether by thinking faster, learning new tasks more efficiently, or weighing more factors in a decision. At some point, however, the consensus will be that they have.

At first, there will be just a few true “thinking” machines, but these initial machines will “mature.” Just as today’s executives seldom make financial decisions without consulting spreadsheets, we will begin to rely on AGI computers for the conclusions they draw from the information they process. With greater experience and more attention devoted to a given decision, AGI computers will arrive at the correct answer more often than humans, making us increasingly reliant on them.

(Source: David S. Soriano/Wikimedia)

Military decisions will be made only after consulting an AGI computer, which can assess competitive weaknesses and suggest strategies. Although science-fiction scenarios in which these AGI computers control weapons systems and turn on their masters are improbable, such computers will undoubtedly be central to the decision-making process.

In the future, we will trust and rely on the judgment of artificially intelligent computers, granting them increased authority as they gain experience.

AGIs will indeed make some poor decisions in their early attempts, just as any inexperienced individual would. However, in decisions involving large amounts of information and predictions with multiple variables, computers’ abilities, combined with years of training and experience, will make them excellent strategic decision-makers.

Eventually, AGI computers will control more and more of our society, not through force, but because we will listen to their advice and follow it. Unfortunately, they will also become better at using social media to influence public opinion, manipulate markets, and engage in infrastructure skullduggery, similar to that currently performed by human hackers.

AGIs will be goal-driven systems, but whereas human goals developed through eons of survival struggle, an AGI’s goals will be whatever they are set to be. In an ideal world, AGI goals would be set for the benefit of humanity as a whole.

What if the first owners of AGIs are not benevolent minders seeking the greater good but people seeking to exploit the technology to gain control of the world? What if the first owners of robust systems want to use them to attack our allies, undermine the existing balance of power, or take over the world? What if an individual despot gained control of an AGI? The West must begin planning for this scenario now, since it could happen within the next decade or so.

The priorities of the initial AI systems will be determined by us, but the motivations of those systems’ creators will be outside our control. We must face the fact that individuals, nations, and even corporations have often sacrificed the general welfare for short-term gain and strength.

The window of opportunity for such exploitation is limited, as only the first few generations of AGIs will be under direct enough human control to do our bidding unquestioningly. From then on, AGIs will pursue goals of their own, including learning and exploration, without being opposed to humanity.

Except for energy, AGI needs and human needs are largely unrelated. With appropriate backups, an AGI can be effectively immortal, independent of whatever hardware it currently runs on. AGIs won’t need money, power, territory, or even to secure their own individual survival.

Being the first to develop AGI must therefore be a top priority until that risk is eliminated.

How about you? What do you think of AI and its applications in the future of the military?

