Google appoints an “AI council” to head off controversy, but it proves controversial
Developing and commercializing artificial intelligence has proved an ethical minefield for companies like Google. The company has seen its algorithms accused of perpetuating race and gender bias and of fueling efforts to build autonomous weapons.
The search giant now hopes that a team of philosophers, engineers, and policy experts will help it navigate the moral hazards presented by artificial intelligence without press scandals, employee protests, or legal trouble.
Kent Walker, Google’s senior vice president for global affairs and chief legal officer, announced the creation of a new independent body to review the company’s AI practices at EmTech Digital, an AI conference in San Francisco organized by MIT Technology Review.
Walker said that the group, known as the Advanced Technology External Advisory Council (ATEAC), would review the company’s projects and plans and produce reports to help determine whether any of them contravene Google’s own AI principles. The council will not have a set agenda, Walker said, nor will it have the power to veto projects. But he said the group’s reports “will help keep us honest.”
The inaugural ATEAC features a philosopher, an economist, a public policy expert, and several researchers in data science, machine learning, and robotics. Several of those chosen actively research issues such as algorithmic bias. The full list is as follows: Alessandro Acquisti, Bubacarr Bah, De Kai, Dyan Gibbens, Joanna Bryson, Kay Coles James, Luciano Floridi, and William Joseph Burns.
But it can be hard for tech companies to prove they are sincere about ethical concerns. The announcement has already provoked a backlash from some AI experts, who question the inclusion of Gibbens and James.
Gibbens is the founder and CEO of a drone company, a choice that seems tone-deaf after Google faced an employee backlash and a storm of negative press over its involvement in Maven, a project to supply cloud AI to the US Air Force for the analysis of drone imagery. That fallout is what prompted Google to announce its AI principles in the first place. James is the president of the Heritage Foundation, a conservative think tank that has been accused of, among other things, spreading misinformation about climate change.
The controversial announcement comes amid a series of scandals over the development and use of artificial intelligence at Google and other big tech companies. For example, algorithms used for face recognition and for screening job applicants have been shown to exhibit racial bias.
Walker said on stage that Google already vets its AI projects carefully. He noted that the company has chosen not to supply face recognition technology over fears it could be misused. In another case, he said, the company released a lip-reading AI algorithm despite worries that it might be used for surveillance, because it judged that the potential benefits outweighed the risks.
At EmTech, Walker acknowledged that the council would need to consider emerging AI risks, and he identified misinformation and AI-powered video manipulation as particular concerns. “How do we detect this across our platforms? We are working very hard on this,” he said. “We are a search engine, not a truth engine.”