When Geoffrey Hinton had an ethical objection to his employer Google working with the US military in 2018, he did not join public protests or put his name to an open letter of complaint signed by more than 4,000 of his colleagues.
Instead, he just talked to Sergey Brin, the co-founder of Google. “He said he was a little upset about it too. And so they didn’t pursue it,” Hinton said in an interview at the time.
The incident is symbolic of Hinton’s quiet influence in the world of artificial intelligence. The 75-year-old professor is revered as one of the “godfathers” of AI due to his pioneering work in deep learning – an area of AI that has driven major developments in the sector.
But the anecdote also reflects Hinton’s sense of honesty, according to those who know him well. On principle, he has never publicly voiced any corporate grievances, ethical or otherwise.
This belief led him to quit his role as vice-president and engineering fellow at Google last week, so he could speak more freely about his growing fears about AI’s dangers to humanity.
Yoshua Bengio, his longtime colleague and friend, who won the Turing Award with Hinton and Yann LeCun in 2018, said he saw the resignation coming. “He could have stayed at Google and spoken out, but his sense of honesty wouldn’t have allowed it,” Bengio said.
Hinton’s resignation follows a series of groundbreaking AI launches over the past six months, starting with Microsoft-backed OpenAI’s ChatGPT in November and Google’s own chatbot, Bard, in March.
Hinton has expressed concern that the race between Microsoft and Google will push forward AI development without appropriate guardrails and regulations being put in place.
“I think Google was very responsible from the start,” he said in a speech at an EmTech Digital event on Wednesday, after his resignation was made public. “Once OpenAI had built similar things with . . . money from Microsoft, and Microsoft decided to put it out there, then Google didn’t have much of a choice. If you live in a capitalist system, you can’t stop Google competing with Microsoft.”
Since the 1970s, Hinton has pioneered the development of “neural networks”, technology that tries to mimic how the brain works. It now powers most of the AI tools and products we use today, from Google Translate and Bard to ChatGPT and autonomous cars.
But this week, he acknowledged fears about its rapid development, which could result in misinformation flooding the public sphere and AI usurping more human jobs than predicted.
“What I’m worried about is that it will make the rich richer and the poor poorer. As you do that . . . society gets more violent,” Hinton said. “This technology, which ought to be amazing . . . is being built into a society that is not designed to use it for the good of all.”
Hinton also sounded the alarm about the long-term threats AI systems pose to humans if the technology is given too much autonomy. He had always believed this existential danger to be remote, but has recently recalibrated his thinking on its urgency.
“It is very conceivable that humanity is a passing stage in the evolution of intelligence,” he said. Hinton’s decision to leave Google after a decade was spurred by an academic colleague who convinced him to speak out, he added.
Born in London, Hinton comes from a distinguished family of scientists. He is the great-great-grandson of the British mathematicians Mary and George Boole, the latter of whom invented Boolean logic, the theory that underlies modern computing.
A cognitive psychologist by training, Hinton has aimed in his AI work to approximate human intelligence — not just to develop AI technology but to explain the workings of our own brains.
Stuart Russell, an AI professor at the University of California, Berkeley, and an academic peer of Hinton’s, said his background meant he was “not the most mathematical of people you’ll find in the machine-learning community”.
He pointed to Hinton’s breakthrough in 1986, when he published a paper on a technique called “backpropagation”, which showed how computer software can learn over time.

“It’s obviously an important contribution,” Russell said. “But he didn’t derive the . . . rule as a mathematician would have. He used his intuition to figure out something that would work.”
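The idea behind backpropagation can be sketched in a few lines of code. This is an illustrative toy, not the notation or network of the 1986 paper: a tiny one-hidden-layer network learns the XOR function by passing the output error backwards through the chain rule, so that every weight is nudged in the direction that reduces the error. The architecture (3 hidden units), learning rate and seed are arbitrary choices for the sketch.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1, x2, w1, b1, w2, b2):
    """Forward pass: two inputs -> hidden layer -> one output."""
    h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(len(b1))]
    y = sigmoid(sum(w2[j] * h[j] for j in range(len(h))) + b2)
    return h, y

# Tiny network: 2 inputs, 3 hidden units, 1 output, trained on XOR.
w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(3)]
b1 = [0.0] * 3
w2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

def total_error():
    return sum((forward(x1, x2, w1, b1, w2, b2)[1] - t) ** 2
               for (x1, x2), t in data)

start_error = total_error()
for _ in range(5000):
    for (x1, x2), t in data:
        h, y = forward(x1, x2, w1, b1, w2, b2)
        # Backward pass: propagate the output error back through the
        # chain rule to assign blame to each weight, then step downhill.
        dy = (y - t) * y * (1 - y)
        for j in range(3):
            dh = dy * w2[j] * h[j] * (1 - h[j])   # uses w2[j] before update
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x1
            w1[j][1] -= lr * dh * x2
            b1[j] -= lr * dh
        b2 -= lr * dy
end_error = total_error()
```

After training, `end_error` is lower than `start_error`: the network has, in Russell’s phrase, figured out something that works without anyone deriving the answer by hand.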
Hinton was not always outspoken in public about his ethical views but in private he made them clear.
In 1987, when he was an associate professor at Carnegie Mellon University in the US, he decided to leave his position and move to Canada.
One of the reasons he gave, according to Bengio, was ethical: he was concerned about the use of technology, particularly AI, in warfare, and most of his funding at the time came from the US military.
“He wants to feel good about the funding he got and the work he’s doing,” Bengio said. “He and I share values about society. That people matter, that the dignity of all people matters. And everyone should benefit from the progress that science creates.”
In 2012, Hinton and two of his graduate students at the University of Toronto — one of them Ilya Sutskever, now a co-founder of OpenAI — made a major breakthrough in the field of computer vision. They built neural networks that could recognize objects in images orders of magnitude more accurately than was previously possible. Based on this work, they founded their first start-up, DNNresearch.
Their company — which did not make any products — was sold to Google for $44mn in 2013, after a competitive auction led to China’s Baidu, Microsoft and DeepMind bidding to acquire the trio’s expertise.
Since then, Hinton has spent half his time at Google, and the other half as a professor at the University of Toronto.
According to Russell, Hinton is constantly coming up with new ideas and trying new things. “Whenever he had a new idea, he would say at the end of his talk: ‘And this is how the brain works!’”
When asked on stage whether he regretted his life’s work, given that it might have contributed to the many harms he outlined, Hinton said he had thought about it.
“This stage of [AI] was not predictable. And until very recently, I thought this existential crisis was a long way off,” he said. “So I don’t really regret what I did.”