Military-grade artificial intelligence (AI) will soon be the norm in warfare, and even if the international community agrees to ban it, a ban may not be enough. The use of AI in warfare could have devastating consequences for humanity. AI is now being integrated into military operations to reduce human casualties, but what are the risks? What are the implications? How can we be sure that military-grade AI remains under control? Do we really want to let machines determine who lives and who dies? Maybe…

There are many questions about how AI will affect us. Still, one thing is clear: weaponized military AI is dangerous, especially in the hands of the current Joint Chiefs of Staff leading the Department of Defense, leaders who have trouble downloading the latest smartphone update. Think about that for a moment.

Weaponized military AI is a real threat. It could be used for unethical purposes: inflicting harm, enabling torture, or violating human rights. The international community is currently debating whether the use of military AI should be banned, but that ship has long since sailed.

Another big problem, returning to the smartphone dig above, is that Department of Defense leadership has been accused of not understanding military-grade artificial intelligence. The accusation is that they rely far too heavily on experts instead of doing the research themselves.

This lack of firsthand knowledge puts them at a disadvantage within the broader organization when important decisions must be made about using AI in military systems.


What is Military-Grade AI?

With artificial intelligence, the possibilities seem limitless. AI can make decisions and take action without human intervention, and that autonomy is the source of its many perceived military advantages.
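To make "without human intervention" concrete, here is a deliberately toy sketch of what a fully autonomous decision loop reduces to. Every name in it (Track, detect, engage, THRESHOLD) is hypothetical and invented for illustration; it is not modeled on any real system. The point is that the entire life-or-death decision collapses into a confidence score crossing a hard-coded threshold, with no human anywhere in the loop:

```python
# Hypothetical sketch only: all names are invented for illustration.
from dataclasses import dataclass
import random


@dataclass
class Track:
    track_id: int
    confidence: float  # model's belief that the track is hostile, 0.0-1.0


def detect() -> list[Track]:
    """Stand-in for a sensor/ML pipeline producing candidate targets."""
    return [Track(track_id=i, confidence=random.random()) for i in range(3)]


def engage(track: Track) -> None:
    """Stand-in for a lethal action. Note: there is no human approval step."""
    print(f"engaging track {track.track_id} (confidence={track.confidence:.2f})")


THRESHOLD = 0.9  # a single hard-coded number decides who lives and who dies


def autonomous_loop() -> None:
    for track in detect():
        if track.confidence >= THRESHOLD:
            engage(track)  # decision and action, fully closed to humans


if __name__ == "__main__":
    autonomous_loop()
```

Even in this caricature, the design question is visible: the only "judgment" is where someone set THRESHOLD, and everything the article worries about hinges on who chose that number and how well they understood the model behind it.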

Artificial intelligence has evolved massively in recent years, to the point that it can now be weaponized and used for warfare.