Some experts believe AGI will never occur, or at least not for hundreds of years; this view is generally based on approaches that attempt to simulate the brain or its components. However, there are many paths to AGI, and many of them will produce custom AGI chips that boost performance in the same way today's GPUs enable machine learning.
Computing power, it is argued, is already sufficient to achieve AGI; the real obstacle is our limited knowledge of how the human brain learns and processes information.
Many experts anticipate that AGI will develop gradually over the next decade, driven by the research currently being conducted in speech recognition, computer vision, and robotics. As AGI capabilities continue to mature, they will eventually reach human levels.
AI and AGIs vs. Humans
It remains unclear whether future AGIs will surpass human mental abilities by thinking faster, learning new tasks more efficiently, or weighing more factors in their decisions. At some point, however, the consensus will be that AGIs have surpassed human mental abilities.
At first, there will be just a few true “thinking” machines, but these initial machines will “mature.” Just as today’s executives seldom make financial decisions without consulting spreadsheets, we will come to rely on the conclusions AGI computers draw from the information they process. With greater experience and sustained focus on a given class of decisions, AGI computers will reach the correct answer more often than humans do, making us increasingly reliant on them.

Military decisions will be made only after consulting an AGI computer, which can assess competitive weaknesses and suggest strategies. Although science-fiction scenarios in which these AGI computers control weapons systems and turn on their masters are improbable, the computers will undoubtedly be central to the decision-making process.
In the future, we will trust and rely on the judgment of artificially intelligent computers, granting them increased authority as they gain experience.
AGIs’ early attempts will indeed include some poor decisions, just as any inexperienced individual’s would. In decisions involving large amounts of information and predictions with multiple variables, however, computers’ abilities, combined with years of training and experience, will make them excellent strategic decision-makers.
Eventually, AGI computers will control more and more of our society, not through force, but because we will listen to their advice and follow it. Unfortunately, they will also become better than humans at using social media to influence public opinion, manipulating markets, and carrying out the kind of infrastructure skullduggery currently performed by human hackers.
AGIs will be goal-driven systems, but whereas human goals developed through eons of survival struggle, AGI goals will be set deliberately. In an ideal world, they would be set for the benefit of humanity as a whole.
What if the first owners of AGIs are not benevolent minders seeking the greater good but people seeking to exploit the technology to gain control of the world? What if the first owners of these powerful systems want to use them to attack our allies, undermine the existing balance of power, or take over the world? What if an individual despot gained control of an AGI? The West must begin planning for this scenario now, since it could happen within the next decade or so.
We will set the priorities of the initial AGI systems, but the motivations of those systems’ creators will be outside our control. We must face the fact that individuals, nations, and even corporations have often sacrificed the general welfare for short-term gain and power.
The window of opportunity for such a takeover is limited: only the first few generations of AGIs will be under sufficiently direct human control to do our bidding unquestioningly. From then on, AGIs will pursue their own goals, which will include learning and exploration but not opposition to humanity.
Except for energy, AGI needs and human needs are largely unrelated. Given appropriate backups, an AGI can be effectively immortal, independent of whatever hardware it currently runs on. AGIs won’t need money, power, territory, or even their own individual survival; they can exist without these things.
Being the first to develop AGI must remain a top priority until this risk is eliminated.
How about you? What do you think of AI and its future applications in the military?