ARTIFICIAL INTELLIGENCE AND OUR FUTURE


WASHINGTON EXAMINER

Embracing AI: How thinking weapons will simplify — and vastly complicate — future warfare

  Jamie McIntyre is the Washington Examiner’s senior writer on defense and national security. His morning newsletter, “Jamie McIntyre’s Daily on Defense,” is free and available by email subscription at dailyondefense.com. | November 14, 2019

For the Pentagon, it was an ominous glimpse into the future.

The date was July 11, 2014, and Ukrainian forces assembled about five miles from the Russian border in southeastern Ukraine were preparing for a final push to the border.

As Defense Secretary Mark Esper recounted to a forum sponsored by the National Security Commission on Artificial Intelligence this month, the Ukrainian troops, flushed with recent battlefield success against Russian-backed separatists, were feeling confident.

Suddenly, they heard the hum of Russian drones overhead, followed quickly by cyberattacks that jammed their communications, blinding their command and control systems.

Then a devastating fusillade of Russian artillery fire rained down on them, and in a matter of minutes, dozens of Ukrainian soldiers were killed, hundreds more wounded, and most of their armored vehicles destroyed.

The Ukrainian offensive was stopped dead in its tracks.

“The world was quickly awakened to a new era of warfare advanced by the Russians,” Esper said. “It’s clear the threats of tomorrow are no longer the ones we have faced and defeated in the past.”

Fast-forward five years to today, when rapid advances in artificial intelligence, or AI, foreshadow a grave new world of thinking machines and killer robots that will change the nature of modern warfare as profoundly as smart bombs and GPS did during the 1991 Persian Gulf War nearly three decades ago.

“Whichever nation harnesses AI first will have a decisive advantage on the battlefield for many, many years,” said Esper, who has made accelerating AI research and development a top Pentagon priority. “We have to get there first.”

Simply defined, artificial intelligence is the ability of computer systems to solve problems and perform tasks that would otherwise require human intelligence.

AI systems don’t just follow a program of “if x, then y”: They analyze large amounts of data, perform millions of calculations, and learn from the results, refining their own decision-making so they can make higher-quality decisions at greater speed than humans.
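To make that distinction concrete, here is a minimal illustrative sketch in Python, contrasting a fixed rule with a system that adjusts itself from examples. Everything in it, the speeds, the labels, and the function names, is invented for illustration and is not drawn from the article or any Pentagon system.

```python
# A minimal sketch (all data and names invented) contrasting a fixed
# "if x, then y" rule with a system that tunes itself from examples.

# Rule-based: the programmer fixes the behavior once, forever.
def rule_based_alert(speed_mph):
    return "alert" if speed_mph > 500 else "ignore"

# Learning-based: a one-parameter model nudges its own decision
# threshold after every mistake, improving with each pass over the data.
def learn_threshold(examples, threshold=0.0, step=20.0, epochs=40):
    for _ in range(epochs):
        for speed, is_threat in examples:
            flagged = speed > threshold
            if flagged and not is_threat:
                threshold += step   # false alarm: raise the bar
            elif not flagged and is_threat:
                threshold -= step   # missed threat: lower the bar
    return threshold

# Hypothetical labeled observations: (speed in mph, was it a threat?)
data = [(120, False), (480, False), (610, True), (900, True)]
print(f"learned threshold: {learn_threshold(data):.0f} mph")
```

The toy learner ends up drawing its own line between the harmless and hostile examples. Real military AI systems do the same thing with millions of parameters instead of one, which is what allows them to handle situations no programmer anticipated.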

AI will soon pervade every aspect of our lives. It will enable self-driving cars, spot forest fires, analyze the facial expressions of job seekers, and, sadly, end the era of human chess champions.

And its applications to warfighting are already evident.

In future wars, the side with the best algorithm will win, says Air Force Lt. Gen. Jack Shanahan, the director of the Pentagon’s Joint Artificial Intelligence Center.

Armed aerial drones and unmanned warships already can perform complicated tasks beyond the physical and mental capabilities of humans.

The Pentagon’s National Defense Strategy, updated in 2018, calls for the rapid application of advanced autonomous systems employing artificial intelligence and machine learning to gain competitive military advantages over China and Russia.

China has already declared its intention to be the world leader in AI by 2030, while Russia has called AI “the future of humanity” and sees the development of superhuman intelligence as the key to ruling the world.

According to the Pentagon, Chinese companies are already exporting advanced military aerial drones to the Middle East, marketed as capable of conducting lethal, targeted strikes.

And the People’s Liberation Army of China is hoping to leapfrog over America’s superior naval power with a new generation of low-cost, long-range autonomous and unmanned submarines.

The very thought of a drone scanning the battlefield and employing artificial intelligence to identify and destroy an enemy target without human control evokes a chilling image of Terminator-style killing machines. It has prompted about two dozen countries to propose banning lethal autonomous weapons as inhumane, putting them in the same class as chemical munitions, incendiary weapons, blinding lasers, and exploding bullets.

The United States, along with Russia, Israel, and other countries investing in robots and artificial intelligence, opposes a ban, arguing that there is no practical way to freeze what may be one of the most significant technological advances of the 21st century.

“The question is not whether AI will be used by militaries around the world,” says Esper. “The real question is whether we let authoritarian governments dominate AI, and by extension the battlefield, or whether industry, the United States military and our partners can work together to lead the world in responsible AI research and application.”

To that end, the Defense Innovation Board, a Pentagon advisory panel that includes executives from Google, Microsoft, and Facebook, proposed a set of five ethical guidelines for how AI-enabled weapons should be designed and employed on the battlefield.

The prime directive is the idea that humans must be in the loop and “should exercise appropriate levels of judgment” while remaining “responsible for the development, deployment, use, and outcomes of AI systems.”
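In software terms, that directive amounts to a gate that no machine recommendation can bypass. The sketch below is a hypothetical illustration of the idea, not anything the board prescribed; the function names, the 0.9 confidence cutoff, and the confirmation step are all invented.

```python
# Hypothetical human-in-the-loop gate (invented names and thresholds):
# the AI may recommend, but only a person can authorize action.

def engage_decision(target, classifier, human_confirms):
    score = classifier(target)        # the model's recommendation only
    if score < 0.9:
        return "hold"                 # the model itself is not confident
    # A human remains responsible for the use and outcome of the system,
    # so even a confident model cannot act on its own.
    return "engage" if human_confirms(target, score) else "hold"

# Demo: the stand-in model is confident, but the operator declines.
decision = engage_decision(
    target={"id": "track-42"},
    classifier=lambda t: 0.97,         # stand-in for a trained model
    human_confirms=lambda t, s: False  # the operator says no
)
print(decision)  # -> hold
```

The design point is that autonomy is a property of the architecture, not the model: deleting the confirmation step, not improving the algorithm, is what would turn such a system into an autonomous weapon.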

But some computer ethicists worry that, just as drivers in unfamiliar territory go whichever way their GPS app dictates, humans overwhelmed with complex decisions will tend to defer to the superior intellect of the computer.

“Intelligence is really power, and if you build something more powerful than yourself, how are you going to be sure it never has any power over you, ever?” asks Stuart Russell, a professor of computer science at the University of California, Berkeley.

“At some point, we have to expect that artificial intelligence is going to overtake human capabilities in general,” Russell said in a BBC interview this month. “And I believe we are completely unprepared for that to happen.”

“While technology is constantly changing, our commitment to the law, to ethics, and to duty does not,” counters Esper. “We will ensure that we develop this technology in ways that uphold our values and advance security, peace, and stability at the same time.”

In its 2020 budget, the Pentagon requested almost $1 billion for artificial intelligence and $3.7 billion for the development of autonomous systems, including a nearly tenfold increase in spending on large unmanned surface vehicles for the Navy of the future, to $447 million from $49 million in 2019.

But Democratic Sen. Chuck Schumer, the minority leader, argues the U.S. should be spending 100 times as much, or $100 billion, to avoid falling behind China.

“We will do better than the Chinese government, dollar-for-dollar, in investing in AI,” Schumer said at the same conference attended by Esper. “If they outspend us three, four, five to one, which they’re doing now, we’ll fall behind in five years or 10 years, and we will rue the day.”

But the money for new AI research and development is among the initiatives stalled by Congress’s failure so far to pass the fiscal year 2020 budget.
