Google pledges not to develop AI weapons, but says it will still work with the military
https://www.theverge.com/2018/6/7/17439310/google-ai-ethics-principles-warfare-weapons-military-project-maven
Google has released a set of principles to guide its work in artificial intelligence, making good on a promise it made last month following a months-long controversy over its involvement in a Department of Defense drone project. The document, titled “AI at Google: our principles” and published today on Google’s primary public blog, sets out the objectives the company is pursuing with AI, as well as the applications it will not pursue. It’s authored by Google CEO Sundar Pichai.
Notably, Pichai says his company will never develop AI technologies that “cause or are likely to cause overall harm,” that involve weapons, that are used for surveillance violating “internationally accepted norms,” or “whose purpose contravenes widely accepted principles of international law and human rights.” The company’s main principles for AI research are to be “socially beneficial”; to “avoid creating or reinforcing unfair bias”; to be built and tested safely; to be accountable to human beings and subject to human control; to incorporate privacy; to “uphold high standards of scientific excellence”; and to be used only for purposes that align with those previous six principles.