The Pentagon has adopted new ethical principles as it plans to expand its use of artificial intelligence technology on the battlefield. According to the new policies, people must “exercise appropriate levels of judgment and care” when deploying and using AI systems, such as those that scan aerial imagery to look for targets.
The defense body further states that decisions made by automated systems should be “traceable” and “governable.” The director of the Pentagon’s Joint Artificial Intelligence Center, Air Force Lt. Gen. Jack Shanahan, said, “There has to be a way to disengage or deactivate” them if they demonstrate unintended behavior.
Shanahan further stated that the guidance also helps secure America’s technological advantage as China and Russia pursue military AI with little attention paid to ethical concerns.
The Pentagon’s latest decision to accelerate its AI capabilities has triggered a conflict among tech companies over a $10 billion cloud computing contract known as the Joint Enterprise Defense Infrastructure, or JEDI. Microsoft won the deal in October but could not begin work on the 10-year project because Amazon sued the Pentagon, arguing that President Trump’s antipathy toward Amazon and its CEO, Jeff Bezos, hurt the company’s chances of winning the bid.
An existing 2012 military directive requires that humans be in control of automated weapons, but it did not address the broader uses of AI. The new U.S. principles are meant to guide both combat and non-combat applications, from intelligence-gathering and surveillance operations to predicting maintenance problems in planes or ships.
Last year, the Defense Innovation Board, a group led by former Google CEO Eric Schmidt, made the same kind of recommendations. Lucy Suchman, an anthropologist who studies the role of AI in warfare, said, “I worry that the principles are a bit of an ethics-washing project.” She added, “The word ‘appropriate’ is open to a lot of interpretations.”