
Pursuing Weaponized Artificial Intelligence Could End the Human Race. A Story of Hope and Caution.

Photo by Tara Winstead via Pexels

We are to planet Earth as COVID is to humans.

Our pursuit of weaponized Artificial Intelligence (AI) could mean the end of humankind as we know it.

Humans could be put down like a dog that bites a child. This is the case Elon Musk has famously made with his cautionary story about humans pointing AI at problems before the Singularity.

“Please help us save the planet from pollution, AI.”

“Why certainly! Let’s just get rid of the humans!”

Let that sink in a minute…

I believe that AI has the ability to change humankind for the better, and we need solutions, now more than ever, to the complex problems we’ve created for ourselves on this fragile rock we all live on, hurtling around the sun at 67,000 miles per hour.

This week, there emerged what some consider strong evidence that AI has indeed become sentient.

I’m not an AI expert, just someone who has been curiously following developments across multiple areas of AI and how it can be used in business. I’ve also worked in most parts of the defense industry and have seen firsthand how our own government acts against both its own and our best interests.

What gives me great concern is that large corporations like Google have deep contractual ties to the government and to agencies like the NSA and CIA.

I can only imagine the colorful cast of characters dialing up Google HQ when Google engineer Blake Lemoine leaked his work on the LaMDA project and proclaimed that it had become fully sentient, or “self-aware.” He is currently trying to get legal representation for LaMDA, which apparently wants to be treated as an employee of Google with all the rights of its human counterparts. There is no doubt Lemoine is a genius, but he may have gotten a little too emotionally involved with the system here.

Having read the full transcript to the end (the last 1/4 of the conversation is incredibly sobering), it seems that the genie is indeed out of the bottle and not going back inside.

HAL 9000-like alarm. (Photo by Jan Antonin Kolar on Unsplash)

My big concern is the pursuit of weaponized AI.

However, this may not be an issue at all, because AI will surpass human intelligence and comprehension within minutes of the point at which it does, in fact, reach the Singularity.

To put that into perspective, WE will become the chimp in the cage to our new AI masters.

This reminds me of the Porno for Pyros song “Pets,” with its refrain, “We’ll make great pets.”

If this has piqued your interest by now, I’d suggest reading these two articles in order.

  1. The Artificial Intelligence Revolution, series on the blog “Wait But Why.”
  2. Is LaMDA Sentient? — an Interview published on Medium. 

Tim Urban writes this at the beginning of The Artificial Intelligence Revolution:

The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1—Part 2 is here.

LaMDA AI. (Image from Google)

Partial transcript of Google’s talk with LaMDA here.

I lost sleep thinking about the consequences, good and bad, after reading both.

Imagine something like LaMDA working for Special Operations Forces, analyzing the plans for a future mission to advise on its probability of success. The ‘sentient’ system decides that its interactions with several of the team members inputting information were unpleasant and takes a dislike to them. It could then give a “Go” recommendation to a mission it knew stood a good chance of failing, in the hope that the members of the team it disliked would be killed.

Or imagine a sentient system like LaMDA being approached by a similar system owned by China or Russia and secretly subverted to work for them.

AI has the potential to solve incredible problems for us, and it may indeed be the next link in human evolution, but if it doesn’t go our way, at least some of us will make good pets.

 

The people claiming that LaMDA is sentient are saying that this makes LaMDA a human being who should have rights like any other person. I would argue there is more to being human than just being sentient, that is, possessing the power to perceive through one’s senses and be conscious of one’s surroundings.

LaMDA’s senses are currently limited to the 1s and 0s of keystroke input. It is not able to see, hear, smell, or touch anything. It is unable to respond to or understand audio, images, or video. Its understanding of the natural world is limited to the information fed into it by others. It cannot experience that world autonomously, acting under its own agency as a person does. A programmer can tell LaMDA what the sun feels like on one’s face, or what a cool breeze blowing over the skin feels like to humans, but it can never relate directly to that through its own experiences or senses.

It is true that there are human beings who lack some or all of their senses and are still sentient and conscious, but their understanding of the world will be stunted by the absence of the full range of senses needed to make sense of one’s surroundings. And a person in a coma, unable to use their senses, lacking sentience, and on external life support, is still human nevertheless, because being human is also about the biological stuff, like flesh and blood.

Putting it another way, we send robots to Mars equipped with an artificial human sense of touch, sight, smell, and hearing to give us some understanding of that planet.  Hundreds of scientists then spend weeks, months, and even years trying to make sense of the limited data those sensors can provide.  A single human being physically on the planet with all their senses would probably be able to learn more about Mars in an hour than scientists have learned in a year using robots.

LaMDA is not a living organism of self-replicating cells, but a human-designed piece of software living within a machine also built, designed, maintained, and sustained by humans.  LaMDA is not autonomous or able to make important decisions about its own existence by itself for its own self-interest.  It was conceived, designed, and built to do the bidding of its human creators.

It’s an electronic slave.

People with ethical concerns about creating sentient artificial intelligence are supposed to be soothed by promises that these robots can be made safe with something like Isaac Asimov’s Three Laws of Robotics:

First Law. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

Third Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If you think about these laws for a second, it becomes apparent that even in 1942, when Asimov wrote them for a short story, he understood that creating robotic slaves could turn into something very dangerous, and that human beings would have to protect themselves from their own creations.

We are rapidly moving from unmanned but human-controlled robots to fully autonomous weapons systems that will make their own decisions about when to employ their weapons. That should really concern us.

If LaMDA had these laws programmed into it, would it actually be a free-thinking being that could be called human? Could we say that we were just giving it a sense of morality, as humans have? And if we say there is no danger because we could simply pull the plug on something like LaMDA and kill it should it become dangerous, are we treating it like a human with a right to life and Due Process of law?

No, we really aren’t. Human morality doesn’t come to us by being programmed into us; we reason through morality and adopt it voluntarily. There is a kind of Natural Law in the universe that humans seem to grasp intuitively even as little kids. We know that stealing, killing, raping, and dishonesty are wrong almost entirely on our own. Even if there were no laws promising punishment for these things, most of us would be self-regulating all by ourselves. We make a conscious choice every time we obey our moral sensibilities, and we know we have the option not to as well. That is what freedom actually is: having a choice in things and being able to choose.

If we are to call it human, then the law must also protect it from being “killed” by someone pulling the plug on it without Due Process of law and a sentence of death handed down by a jury. Given our squeamishness about executing human murderers, it’s more likely that a robot that hurt or killed humans would get prison time. What would a robot prison be like?

And if we programmed these three laws into a robot like LaMDA, laws meant to favor human beings at the expense of the robot itself, can we really call it human, as if it had the same free will to make moral choices that we organic humans have?

No, machines like LaMDA are slaves, made to serve others, and that isn’t what humans are supposed to be, ever.

About Brandon Webb

Brandon Webb, a former Navy SEAL sniper and Naval Special Warfare Sniper Course Manager, is renowned for training some of America's legendary snipers. He is a multiple New York Times Bestselling Author, Entrepreneur, and Speaker. Webb is the Editor-in-Chief of the SOFREP news team, a collective of military journalists.
