The Washington Times - Monday, September 16, 2019

PITTSBURGH — The Army’s Artificial Intelligence Task Force, with headquarters in the heart of the Rust Belt, is a crucial component of a grand Pentagon plan to incorporate robots and machine learning into 21st-century warfare.

For skeptics, the task force is the concrete embodiment of how the U.S. is headed down a rocky, uncertain road that could put humanity itself in danger.

The raging debate over AI, including its implications for the human race and the morality of its use in warfare, has divided the U.S. from some traditional allies. It also is fueling a growing band of activists who warn that "killer robots" are on the horizon for a military that has no comprehensive plan to stop them or to understand their implications.

Deep, philosophical questions about the ramifications of AI technology — who is responsible for writing ethical guidelines, to what extent humans must remain in the loop, how much easier it becomes to fight a war waged (initially) by machines, and who bears the blame if a robot or drone ultimately targets humans — are only beginning to be confronted in a systematic way.

So far, global rules of the road have hit impenetrable roadblocks. A multinational effort to ban “lethal autonomous weapons systems” fell flat again last month during a high-level U.N. gathering in Geneva.

Military and diplomatic leaders say that with AI's combat applications in their infancy, it would be foolish to ban them preemptively. U.S. national security, they say, could suffer, and they reject the notion that AI technology ultimately will displace humans and bring about Terminator-like, apocalyptic battlefields.

But Army officials readily acknowledge the deep ethical concerns that come with the work conducted at the AI team’s headquarters, where military personnel collaborate with Carnegie Mellon University researchers on a host of cutting-edge projects.

In an exclusive interview with The Washington Times, Army AI Task Force Deputy Director Col. Doug Matty stressed that the ethics of artificial intelligence — and the real potential that its capabilities could fall into the wrong hands — are pressing considerations as the technology moves inexorably ahead.

“You have to have ethical considerations both from concept all the way through development, all the way to fielding,” he said.

“It’s omnipresent,” he said in reference to the Pentagon-wide mandate that the ethics of AI warfare be kept foremost in mind. “If you do it as an afterthought, then you’ll have a gap where it’ll allow exploitation from the friendly side, or potentially others.”

Inside the Pentagon, the ethics of AI development are a top priority. The Defense Department’s Joint Artificial Intelligence Center (JAIC), a Pentagon-wide project that encompasses AI initiatives in all corners of the military, recently announced that it would hire its first “AI ethicist.”

The job, officials say, will be to develop a comprehensive ethics policy that addresses a host of key questions around the legality, morality and practicality of the use of weapons and vehicles that require minimal human involvement in the business of potentially killing an enemy.

“We are going to bring in someone who will have a deep background in ethics, and then the lawyers within the department will be looking at how we actually bake this into the Department of Defense,” Air Force Lt. Gen. Jack Shanahan, JAIC director, told reporters this month.

At the core of the cutting-edge projects at the Army’s AI center in Pittsburgh is a focus on how they will ultimately help soldiers in the field, officials say. The technology, they say, cannot be looked at in a vacuum and instead should be viewed through a national security lens.

“To separate the technology from the mission is a misnomer,” Col. Matty said. “The capability is developed to support the mission.”

Growing fears

But military officials also are keenly aware of the deeper issues at play. In the Navy, for example, leaders don’t deny uneasiness with their research into autonomous weapons.

“Trust is something that is difficult to come by with a computer, especially as we start working with our test and evaluation community,” Steve Olson, deputy branch head of the Navy’s mine warfare office, told the publication Defense News.

“I’ve worked with our test and evaluation director, and a lot of times it’s: ‘Hey, what’s that thing going to do?’ And I say: ‘I don’t know, it’s going to pick the best path,’” he said. “And they don’t like that at all because autonomy makes a lot of people nervous. But the flip side of this is that there is one thing that we have to be very careful of, and that’s that we don’t overtrust. … The last thing we want to see is the whole ‘Terminator going crazy’ scenario.”

Critics of the AI revolution argue that world governments, led by the U.S., must adopt a sweeping treaty to govern the use of robots in combat. Leading international powers gathered in Geneva last month to discuss that very topic, and a growing group of nations are renewing a push for a worldwide ban on lethal autonomous weapons systems.

Jordan is the latest of at least 29 countries that have signed on to the idea.

But the world’s leading military powers, including the U.S., Russia and Britain, have resisted such a move. Critics argue that those countries are paving the way for a grim future.

Ultimately, they say, the tide of public opinion will turn against AI and its applications for war.

“Russia and the U.S. are continuing their losing fight to prevent the creation of the inevitable treaty that’s coming for killer robots,” said Mary Wareham, advocacy director of the arms division at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots. “Nations cannot allow their ambition and desire to create a new treaty on these weapons systems to be limited by these military powers.”

The Trump administration remains opposed to a full ban, though officials stressed that Washington would not object to the drafting of international principles to govern the military use of AI.

"These guiding principles included statements that human responsibility for decisions on the use of weapons systems must be retained, since accountability cannot be transferred to machines," a State Department official told The Washington Times.

Human-machine interaction, the official added, "may take various forms and be implemented at various stages of the life cycle of a weapon" and should ensure that any use of AI weaponry "is in compliance with applicable international law, in particular international humanitarian law."

• Ben Wolfgang can be reached at bwolfgang@washingtontimes.com.
