AI Threat, Bad Engineering, or a Misunderstanding?

Photo - AI Threat, Bad Engineering, or a Misunderstanding?
AI and aerospace. What could go wrong? It seems like a lot, even though the dire claims are already being denied.
An intriguing incident unfolded after a thought-provoking debate session at the Royal Aeronautical Society. The event gathered 70 speakers and over 200 delegates from the armed services industry, academia, and the media.

The participants covered a wide array of topics, including Russia’s war against Ukraine, and also focused on a simulated test. As part of it, an AI-enabled drone was tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go assigned to humans. 

However, it went differently than planned. During the training, the drones were reinforced that destruction of the SAM was the top option. As a result, this belief led them to believe that human’s command is of lesser value. So, if a human told the machine not to destroy SAM, the AI would disobey and even kill the human operator.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,”  Colonel Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, said.

He provided a set of more disturbing facts about the system, saying that people were trying “to talk sense” into AI by telling it that it would lose points for killing the operator. 

“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” he said.
He added that it’s impossible to have a conversation about AI, intelligence, machine learning, and autonomy while omitting ethics.

Despite the officially published information, which caused a great stir on social media with some people pointing out that this is due to bad engineering rather than AI, it was later claimed that the Department of the Air Force has not conducted any such AI-drone simulations.

Air Force spokesperson Ann Stefanek said, “The colonel's comments were taken out of context and were meant to be anecdotal."

His statements, however, were not removed from the site at the time of writing.

Previously, Gagarin News reported about AI's existential threat.