Nobel laureate physicist 'unnerved' by the AI technology he contributed to
Nobel-winning physicist expresses unease about the AI technology he contributed to developing.
John Hopfield, a professor emeritus at Princeton who shared the 2024 Nobel Prize in Physics with Geoffrey Hinton, has joined his co-recipient in stressing the need for a deeper understanding of deep-learning systems, warning that unchecked progress could end in disaster.
Speaking remotely from Britain to a gathering at the New Jersey university, the 91-year-old Hopfield said that over his lifetime he has watched two powerful but potentially dangerous technologies emerge: biological engineering and nuclear physics.
"One is accustomed to having technologies which are not singularly only good or only bad, but have capabilities in both directions," he remarked.
He said he is uneasy about technologies that cannot be controlled or fully understood: "And as a physicist, I'm very unnerved by something which has no control, something which I don't understand well enough so that I can understand what are the limits which one could drive that technology."
"The question AI is pushing," he went on, "that's why I myself, and I think Geoffrey Hinton also, would strongly advocate understanding as an essential need of the field, which is going to develop some abilities that are beyond the abilities you can imagine at present." He described modern AI systems as "absolute marvels," yet pointed out the significant gaps in our understanding of their operations, calling this uncertainty "very, very unnerving."
Hopfield's recognition stems from his creation of the "Hopfield network," a theoretical framework illustrating how artificial neural networks can emulate the memory storage and retrieval functions of biological brains. Hinton built on Hopfield’s model with his "Boltzmann machine," which integrated randomness and laid the groundwork for contemporary AI applications, including image generation.
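For readers curious about the mechanism behind the prize, the sketch below is a minimal, illustrative Hopfield network in Python; it is not drawn from Hopfield's own code, and the function names and toy patterns are invented for the example. Patterns are stored in a weight matrix with a simple Hebbian rule, and a corrupted input is nudged back toward the nearest stored pattern by repeated updates, which is the sense in which the network "retrieves" a memory.

```python
import numpy as np

# Minimal Hopfield-network sketch (illustrative only): store +/-1 patterns
# with a Hebbian outer-product rule, then recover a stored pattern from a
# corrupted cue by repeatedly updating the units.

def train(patterns):
    """Build the weight matrix from +/-1 patterns via Hebbian learning."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Asynchronously update units until the network settles on a pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

if __name__ == "__main__":
    stored = np.array([[1, -1, 1, -1, 1, -1],
                       [1, 1, 1, -1, -1, -1]])
    W = train(stored)
    noisy = np.array([1, -1, 1, -1, -1, -1])    # corrupted copy of pattern 0
    print(recall(W, noisy))                     # usually recovers [1 -1 1 -1 1 -1]
```

The toy example stores two six-unit patterns and hands the network a one-bit-flipped copy of the first; the update rule drives the state back to the stored original, the associative-memory behavior the Nobel citation highlights.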
Hinton has also gained attention for his views on the potential risks associated with AI, which he reiterated during a conference at the University of Toronto, where he holds the title of professor emeritus. "If you look around, there are very few examples of more intelligent things being controlled by less intelligent things," he cautioned, suggesting that as AI surpasses human intelligence, it might seize control.
Amid rapid advancements in AI capabilities and fierce competition among corporations, experts have raised concerns that the technology is progressing more swiftly than researchers can fully grasp. Hopfield highlighted this issue, explaining, "You don't know that the collective properties you began with are actually the collective properties with all the interactions present, and you don't therefore know whether some spontaneous but unwanted thing is lying hidden in the works."
He pointed to "ice-nine," the fictional substance in Kurt Vonnegut's 1963 novel "Cat's Cradle" that was created to assist soldiers but ended up freezing the oceans and bringing about civilization's collapse. "I'm worried about anything that says... 'I'm faster than you are, I'm bigger than you are ... can you peacefully inhabit with me?' I don't know, I worry," he said.
Hinton echoed that uncertainty, saying there is at present no known way to head off catastrophic outcomes. "That's why we urgently need more research," he asserted. He urged that "our best young researchers, or many of them, should work on AI safety, and governments should force the large companies to provide the computational facilities that they need to do that."
Aarav Patel for TROIB News