Why the 'Godfather of AI' Believes Tech Giants Are Missing the Bigger Picture in AI Development

A father of AI. Photo: Getty
Tech industry leaders often overlook the long-term implications of AI while developing the technology, according to computer scientist and Nobel laureate Geoffrey Hinton in an interview with Fortune.
Their primary focus tends to be on immediate research results and short-term financial gains. Hinton, who is often referred to as the 'godfather of AI,' has consistently raised alarms about the repercussions of advancing AI without clear intentions and safeguards.
Elon Musk envisions a future where AI transforms our lives dramatically: the technology could potentially replace all jobs, while a 'universal high income' would allow everyone to enjoy a theoretical abundance of goods and services. Even if Musk's ambitious vision were to materialize, it would inevitably force a profound existential reckoning.
'What we will really be grappling with is the question of meaning,' Musk stated during the Viva Technology conference in May 2024. 'If computers and robots can outperform you in every aspect... does your life hold any significance?'
However, according to Geoffrey Hinton, most industry leaders are not contemplating this critical question about the ultimate implications of AI. In the realm of AI development, Big Tech seems more focused on immediate outcomes than on the technology's long-term effects.
'For the company owners, the driving force behind the research is short-term profit,' Hinton, a professor emeritus of computer science at the University of Toronto, shared with Fortune.
For the developers working on the technology, Hinton explained, the emphasis is similarly placed on the tasks at hand rather than the overarching results of their research.
'Researchers are primarily motivated by solving problems that pique their curiosity. It's not as if we all begin with a unified goal of determining the future of humanity,' Hinton remarked.
Hinton has consistently cautioned against the perils of AI developed without proper safeguards and deliberate direction, estimating a 10% to 20% likelihood that the technology could lead to human extinction following the emergence of superintelligence.
In 2023, ten years after he sold his neural network firm DNNresearch to Google, Hinton departed from his position at the tech giant, eager to express his concerns about the risks associated with the technology and apprehensive about the inability to stop "bad actors from exploiting it for malicious purposes."
Hinton’s overarching view on AI
For Hinton, the threats posed by AI can be divided into two main categories: the inherent risks the technology presents to humanity's future, and the repercussions of it being exploited by individuals with harmful intentions.
‎"There’s a significant difference between two types of risk," he remarked. "There’s the danger of malicious individuals misusing AI, which is already a reality. We are witnessing this with instances like counterfeit videos and cyberattacks, and it may soon extend to viruses. This is distinctly different from the risk of AI itself turning into a malevolent force."
Financial entities such as Ant International in Singapore, for instance, have raised concerns about deepfakes amplifying the risk of scams and fraud. Tianyi Zhang, the general manager of risk management and cybersecurity at Ant International, told Fortune that the company found over 70% of new registrations in certain markets were likely deepfake attempts.
“That problem can probably be solved, but the solution to that problem doesn’t solve the other problems,” Hinton said.
For the risk AI itself poses, Hinton believes tech companies need to fundamentally change how they view their relationship to AI. When AI achieves superintelligence, he said, it will not only surpass human capabilities but also develop a strong desire to survive and gain additional control. The current framework around AI, in which humans can control the technology, will therefore no longer apply.
Hinton posits that AI models need to be imbued with a “maternal instinct” so they treat less powerful humans with sympathy rather than seeking to control them.
