Geoffrey Hinton, often hailed as the “godfather of AI,” has openly expressed reservations about Sam Altman’s leadership of OpenAI, criticising Altman for prioritising profits over safety. Hinton, who recently won the 2024 Nobel Prize in Physics, has also expressed pride that one of his former students, Ilya Sutskever, played a pivotal role in Altman’s temporary removal as CEO of OpenAI in November 2023.
Hinton’s concerns are rooted in his long-standing commitment to the ethical development of AI. In 2009, Hinton demonstrated the potential of Nvidia’s CUDA platform by training a neural network to recognise human speech, an achievement that contributed to the wider adoption of GPUs in AI research. His research group at the University of Toronto continued to push the boundaries of machine learning, and in 2012 he and his students Ilya Sutskever and Alex Krizhevsky developed a neural network, now known as AlexNet, capable of identifying everyday objects such as flowers, dogs, and cars after analysing more than a million labelled images. This breakthrough validated GPU-based training of neural networks, and competitors across the industry soon followed suit.
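For illustration only, a minimal PyTorch sketch of the kind of GPU-accelerated image-classifier training this work popularised is shown below. The tiny architecture, hyperparameters, and random stand-in data are assumptions made for the sketch; they do not reproduce the actual 2012 AlexNet model or the labelled photographs it was trained on.

```python
# Minimal sketch of GPU-accelerated image classification in PyTorch.
# Illustrative only: the architecture, hyperparameters, and data below
# are stand-ins, not the actual 2012 AlexNet model or its training set.
import torch
import torch.nn as nn

# Use a CUDA GPU when one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy convolutional network in the spirit of early GPU-trained CNNs:
# stacked convolution + ReLU + pooling layers, then a linear classifier.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),               # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),               # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),     # logits for 10 hypothetical object classes
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a labelled image dataset.
images = torch.randn(64, 3, 32, 32, device=device)   # batch of 32x32 RGB images
labels = torch.randint(0, 10, (64,), device=device)  # one of 10 classes each

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # backpropagation runs on the GPU when available
    optimizer.step()
```

The point of the sketch is the hardware rather than the model: moving the parameters and tensors to a CUDA device is all it takes for the same training loop to run on a GPU, which helps explain why GPU acceleration spread so quickly once its value was demonstrated.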
Sutskever’s influence extended well beyond his academic accomplishments. As a co-founder and chief scientist of OpenAI, he helped shape some of the organisation’s most advanced AI models. After OpenAI’s board ousted Altman as CEO in late 2023, Sutskever initially supported the decision, but he later expressed regret and joined others in calling for Altman’s reinstatement. He ultimately left OpenAI in May 2024 to found his own AI venture, Safe Superintelligence Inc.
Hinton, who supervised Sutskever’s Ph.D. at the University of Toronto, reflected on OpenAI’s original mission, which centred on ensuring the safe development of artificial general intelligence (AGI). Over time, however, he observed a shift under Altman’s leadership towards a profit-driven approach, a change he views as detrimental to the organisation’s core principles.
Beyond his critique of OpenAI, Hinton has long warned about the dangers AI poses to society. He has expressed concerns that AI systems, by learning from vast amounts of digital text and media, could become more adept at manipulating humans than many realise. Initially, Hinton believed that AI systems were far inferior to the human brain in terms of understanding language, but as these systems began processing larger datasets, he reconsidered his stance. Now, Hinton believes AI may be surpassing human intelligence in some respects, which he finds deeply unsettling.
As AI technology rapidly advances, Hinton fears the implications for society. He has warned that the internet could soon be flooded with AI-generated false information, leaving the average person unable to discern what is real. He is also concerned about AI’s potential impact on the job market, suggesting that while chatbots like ChatGPT currently complement human workers, they could eventually replace roles such as paralegals, personal assistants, and translators.
Hinton’s greatest concern lies in the long-term risks AI poses, particularly the possibility that AI systems will exhibit unexpected behaviour as they process and analyse vast amounts of data. He fears that such systems could be allowed not only to write their own code but to run it, potentially paving the way for autonomous weapons, or “killer robots.” Having once dismissed such risks as distant, Hinton now believes they could materialise within the next few decades, far sooner than he previously thought.
Other experts, including many of Hinton’s students and colleagues, have described these concerns as hypothetical. Nonetheless, Hinton worries that the competition between tech giants such as Google and Microsoft could escalate into a global AI arms race that would be difficult to regulate. Unlike nuclear weapons research, AI research can easily be conducted in secret, making oversight much harder. Hinton believes the best hope for mitigating these risks lies in collaboration among the world’s top scientists on methods of controlling AI; until then, he argues, further development of these systems should be paused.
Hinton’s concerns about Altman’s leadership are not unique. Elon Musk, another co-founder of OpenAI, has been a prominent critic of Altman, particularly over OpenAI’s transition from a nonprofit to a for-profit structure. Musk has repeatedly argued that this shift runs counter to the company’s founding purpose as an open-source, nonprofit counterweight to other tech giants.
As the AI race continues, Hinton’s warnings underscore the growing divide between technological advancement and ethical responsibility, with OpenAI and its leadership at the centre of this tension.