Google has discarded its self-imposed ban on using AI in weapons, a step that drew both praise and criticism, marked the arrival of a new player in a hot field, and underscored that the Pentagon—not any single company—must act as the primary regulator of how the U.S. military uses AI in combat.
On Tuesday, Google defended its decision to strip its AI-ethics principles of a 2018 prohibition against using AI in ways that might cause harm.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” the company’s statement reads.
The move is a long-overdue correction to an overcorrection, one person familiar with the company’s decision-making process told Defense One.
That “overcorrection” was Google’s 2018 decision not to renew its contract to work on the Air Force’s Maven project. At the time, Maven was the Pentagon’s flagship AI effort: a tool that vastly reduced the time needed to find useful intelligence in hours and hours of drone-video footage. Within defense circles, the program wasn’t controversial at all. Military officials who described the program consistently said Maven’s primary purpose was to help human operators make sense of large volumes of data, especially in time-sensitive tasks that impose enormous cognitive burdens. Many praised the effort as pointing the way toward other AI-powered decision aids.
But Google was less than perfectly transparent about its involvement in the project, particularly with its own workforce, which helped spark an employee revolt in the form of mass resignations and protests. The company soon dropped the contract—but at the cost of its ability to compete for other important Pentagon IT contracts.
The episode catalyzed the 2019 drafting of the Defense Department’s own AI ethics principles, which were far more comprehensive than those of most Silicon Valley companies. They aimed to reassure the American tech community and international partners that the Pentagon could lead in the ethical use of AI in combat.
The person familiar with the decision-making process at Google said that this week’s announcement was driven by the rapidly shifting landscape around military use of AI.
“The primary driver of this decision was to ensure Google remains a leading voice in responsible AI. The technology frontier and business landscape is totally altered since 2018, so it was time to turn the page on Maven once and for all,” the person said.
Not everyone is pleased, including some Google employees and human-rights groups.
But Greg Allen, director of the Wadhwani AI Center at the Center for Strategic and International Studies, told Defense One, “This is a fabulous decision and one that Google should have made years ago. Helping to protect America is ethical.”
Google is joining a crowded field of AI-focused firms that are increasingly collaborating to shape Pentagon AI use. But Google brings with it unique cloud and AI capabilities, which are part of the reason it was chosen for Project Maven in the first place. Google’s decision, along with the emergence of rival players in the AI defense space, shows how much Silicon Valley sentiment has shifted in favor of collaboration with the military.
Syracuse University professor Johannes Himmelreich, who researches the ethics of artificial intelligence and political philosophy and co-edits the Oxford Handbook of AI Governance, said in an email, “Military and surveillance tech aren’t bad or unethical as such. Instead, supporting national security and doing so in the right way is incredibly important. And supporting national security is, in fact, arguably the ethical thing to do.”
Google’s original ban “probably was overly zealous to begin with,” Himmelreich said.
But Google’s decision also highlights the importance of the Defense Department as the ultimate monitor of how the military uses AI in warfare. Whether the new administration will change the department’s AI ethics principles as China and Russia rapidly advance their own capabilities is an open question.
One AI entrepreneur suggested that China was already ahead.
“We don’t really have industrial policy,” Noosheen Hashemi, CEO of health-app maker January AI, said Thursday at the Globsec Transatlantic Forum. “And, of course, [China’s] AI is all in the military. They have an AI military doctrine and they already have incorporated AI into at least 300 different programs in their military. And we don’t have an AI military doctrine, which is really unfortunate because, you know, we have a lot of bureaucracy, slow approval cycles, but we have insisted on having a human on the loop, and they have not insisted on that. So they have set themselves up for autonomous warfare, which will be faster.”