‘Godfather of AI’ Issues Fresh Caution to Humanity
The likelihood of artificial intelligence leading to human extinction in the next three decades has risen, according to ‘Godfather of AI’ Geoffrey Hinton.
In a Thursday interview on BBC Radio 4, Hinton was asked whether his view had changed since his earlier estimate of a one-in-ten chance of an AI catastrophe. The Turing Award-winning scientist replied, “not really, 10% to 20%.”
This prompted the show’s guest editor, former UK chancellor Sajid Javid, to quip, “you’re going up.” Hinton, who left Google last year, added, “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”
Hinton, often referred to as ‘the Godfather of AI,’ emphasized the difficulties in managing advanced AI systems. He posed a thought-provoking question: “How many examples do you know of a more intelligent thing being controlled by a less intelligent thing?...Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”
He illustrated his concerns by suggesting that humanity could be like a three-year-old compared to a future AI that may be “smarter than people.”
Hinton noted that AI is advancing “much faster than I expected” and called for regulation to ensure safety. He warned against relying solely on corporate profit motives, stating that “the only thing that can force those big companies to do more research on safety is government regulation.”
In May 2023, the Center for AI Safety released a statement signed by several leading figures in the field, including Hinton, which emphasized that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement garnered support from notable signatories like Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and Yoshua Bengio, a pioneer of neural-network research.
Hinton believes that AI systems might eventually exceed human intelligence, escape our control, and inflict severe harm on humanity. He advocates significant investment in AI safety and ethical usage, urging swift action before it is too late.
Conversely, Yann LeCun, Chief AI Scientist at Meta, has taken a different view, suggesting that the technology “could actually save humanity from extinction.”
Alejandro Jose Martinez for TROIB News