But there’s another big problem with DOD tech right now, and we’ll talk about it at the end of this report.
Enter Anduril and Palmer Luckey?
Who Is Anduril?

(Read the full white paper here.)
Who Is Palmer Luckey?
Remember the teenager who built a VR headset called Oculus in his garage?
Palmer Luckey, the son of a car salesman, was homeschooled by his mother in Long Beach, California (Go home school!).
Luckey developed a new design for VR headsets as a seventeen-year-old and, four years later, sold Oculus to Zuck’s Facebook for over $3 billion.
Anduril Industries is the latest venture of Palmer Luckey, the now 26-year-old entrepreneur. The company began work on Project Maven last year, along with efforts to support the Defense Department’s newly formed Joint Artificial Intelligence Center.
Read his Wiki Bio here.
The Future of AI & Project Maven?
The US military is one of the largest users of AI technology today, and that should scare the hell out of all of us. Just look at who was in charge of the Afghanistan pullout and realize that the same leadership is in charge of weaponized AI.
What could possibly go wrong?
The Pentagon has called Project Maven “the most ambitious machine learning effort yet undertaken by the US government.”
Of course, they would.
Project Maven uses machine learning to automatically identify objects in drone footage. It can do this with an incredible degree of accuracy—up to 95% depending on the complexity of the scene.
This system is a breakthrough for America’s military. Manually analyzing video footage takes many hours, and it’s difficult for humans to reliably identify every object in a complex scene. Project Maven cuts that workload dramatically.
But the benefits don’t end there: Project Maven can also use its data and algorithms to track vehicles and even individuals, and to control drone swarms. Coming soon to a battlefield near you.
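To make that concrete, here is a minimal, hypothetical sketch of what “automatically identify objects in drone footage” can look like in code: run a pretrained object detector over each frame and keep only the high-confidence detections. This is not Maven’s actual system or data; the model choice, the 0.80 confidence threshold, and the file name drone_clip.mp4 are all stand-ins for illustration.

```python
# Illustrative sketch only: NOT Project Maven's pipeline, just a toy version of
# "run an object detector over every frame of drone footage and keep the confident hits."
# Assumes torch, torchvision, and opencv-python are installed; "drone_clip.mp4" is a placeholder.
import cv2
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
class_names = weights.meta["categories"]   # generic COCO classes such as "person", "car", "truck"

CONFIDENCE = 0.80                          # keep only detections the model is fairly sure about

cap = cv2.VideoCapture("drone_clip.mp4")   # placeholder input video
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR uint8 HxWx3; the detector expects RGB float CxHxW in [0, 1].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if score.item() >= CONFIDENCE:
            x1, y1, x2, y2 = (int(v) for v in box)
            print(f"frame {frame_idx}: {class_names[int(label)]} "
                  f"({score.item():.2f}) at ({x1},{y1})-({x2},{y2})")
    frame_idx += 1
cap.release()
```

Even this toy loop hints at where the accuracy numbers come from: raising the confidence threshold means fewer false alarms but more missed objects, lowering it does the opposite, and a real system has to pick a point on that trade-off for every class of target.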
Conclusion
Artificial Intelligence has already changed the way we live and work in ways most people don’t realize, and it will only become more prevalent in the future.
It has been predicted that within the next 20 years, AI will outperform humans in every cognitive task we now do on the planet. So, how can we responsibly develop and integrate AI on the battlefield?
The major problem we see, the one we hinted at earlier, is that the current Department of Defense leadership at the top is weak and barely competent.
And if they can’t prevent a guy in a Chewbacca costume from storming Capitol Hill or manage the Afghanistan withdrawal (who took responsibility?), how are they qualified to unleash weaponized Artificial Intelligence on the rest of the world?
That 95% rate of accuracy is pretty good; the problem is the other 5%. That is where the collateral damage would happen, in the form of killing the wrong people.

We may never be able to attain 100% accuracy from AI in locating, tracking, and targeting terrorists for one very simple reason: the creators of AI are human, and humans are not perfect either. When an AI-directed weapon does kill the wrong person, it will be very easy to shift the blame to the robot itself, saying it got confused or misidentified something. The truth, though, is that everything that AI program does, reacts to, or decides to do was programmed by a flawed human.
This is where the fault would truly lie and we should never lose sight of that. We may be able to create perfect AI-directed drone weapons someday, but they will be sent on their missions by imperfect, often badly flawed people.
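To put that “other 5%” in perspective, here is a rough back-of-the-envelope calculation. The mission and volume numbers below are invented purely for illustration; only the 95% accuracy figure comes from the discussion above.

```python
# Back-of-the-envelope illustration of the "other 5%" argument.
# Only the 95% accuracy figure comes from the article; the volume numbers are invented.
accuracy = 0.95                 # cited per-object accuracy in complex scenes
objects_per_mission = 2_000     # hypothetical: objects classified in one mission's footage
missions_per_year = 500         # hypothetical: missions flown in a year

errors_per_mission = objects_per_mission * (1 - accuracy)
errors_per_year = errors_per_mission * missions_per_year

print(f"~{errors_per_mission:.0f} misidentified objects per mission")    # ~100
print(f"~{errors_per_year:,.0f} misidentified objects per year")         # ~50,000
```

Even with made-up inputs, the shape of the problem is clear: at operational scale, “pretty good” accuracy still means a steady stream of wrong calls, and every one of them traces back to choices made by the humans who built and deployed the system.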