Google is pledging not to use artificial intelligence for weapons or surveillance after employees protested the company’s involvement in Project Maven, a Pentagon pilot program that uses AI to analyze drone footage. However, Google says it will continue to work with the United States military on cybersecurity, search and rescue, and other non-offensive projects.
Google CEO Sundar Pichai announced the change in a set of AI principles released today. The principles, which are intended to govern the company’s use of artificial intelligence, come in response to employee pressure on Google to establish guidelines for the technology.
Google employees have spent months protesting the company’s involvement in Project Maven, sending a letter to Pichai demanding that Google terminate its contract with the Department of Defense. Several employees even resigned in protest, concerned that Google was aiding the development of autonomous weapons systems.
Google will focus on creating “socially beneficial” AI, Pichai said, and avoid projects that cause “overall harm.” The company will accept government and US military contracts that do not violate its principles, he added.
“How AI is developed and used will have a significant impact on society for many years to come,” Pichai wrote. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
The AI principles represent a reversal for Google, which initially defended its involvement in Project Maven by noting that the project relied on open-source software that was not being used for explicitly offensive purposes. A Google spokesperson did not immediately respond to a request for comment on the new ethical guidelines.
more at gizmodo.com