Artificial Intelligence pioneer Geoffrey Hinton has voiced serious concerns about the future of AI. Speaking at the 2026 Digital World Conference, he highlighted the uncertainty surrounding whether humans will be able to coexist with highly advanced, super-intelligent systems. Known as the “Godfather of AI” and a Nobel Prize winner in Physics, Hinton stressed the importance of implementing proper rules and ethical safeguards as the technology continues to evolve rapidly.

Regulation Debate
Hinton pointed out that significant financial resources are being invested to persuade people that regulating AI could hinder innovation. Some argue that regulation acts as a brake, while unregulated AI accelerates progress.
However, Hinton warned that this mindset is dangerous. He expressed his views during discussions at the conference, where experts highlighted the need for global cooperation in shaping AI's impact on society.
Social Impact
Participants at the event expressed concern that most AI debates focus heavily on technical progress and business applications, while neglecting broader social effects. Issues such as job losses, inequality, and pressure on public services were highlighted.
Hinton noted that while AI could enhance productivity in sectors like healthcare, it may also replace human workers in other areas.
Job Loss Concerns
He explained that AI systems already perform jobs in industries such as call centres as well as, or even better than, human workers.
In the future, he believes many intellectual roles could also be handled by machines. Even if new jobs are created, AI may perform them more cheaply, making it difficult for workers to compete despite retraining efforts.
Future Risks
Hinton, who left Google in 2023 due to concerns about AI risks, said the technology is advancing at an extremely fast pace. He warned that humans currently have control, but must ensure AI is developed in a way that allows both humans and AI to coexist safely.
Safety Gap
He also highlighted a major imbalance in research efforts, estimating that only a small fraction of AI work focuses on safety. According to Hinton, this lack of attention to long-term risks makes the current situation deeply concerning and calls for immediate action.